
After walking away from a Pentagon project, Google has vowed never to develop AI technologies for use in weapons or surveillance.

Google's new policy around AI development is focused on building the technologies responsibly and limiting any potential misuse, the company said in a blog post.

No Google AI technology will be used as a weapon or for surveillance, the policy states. The company will also refuse to pursue AI projects that "cause or are likely to cause overall harm." Where a technology carries risks, Google will proceed only when it believes the benefits "substantially outweigh the risks," and only with appropriate safeguards in place.

The policy comes after Google employees reportedly pressured the tech giant to end its participation in Project Maven, a Pentagon effort to use AI to analyze footage from aerial drones. The company said the research was for "non-offensive purposes," but some employees feared it could one day be used in actual warfare. In response, they circulated an internal letter arguing that "Google should not be in the business of war."


Resistance to Project Maven was serious enough that at least a dozen staffers reportedly resigned in protest. To placate employees, Google promised a new ethics policy around AI development, which the company made public on Thursday.

Google Cloud CEO Diane Greene confirmed that the company will not seek to renew its government contract for Project Maven. However, the new AI ethics policy doesn't spell an end to Google's involvement with the Pentagon. Far from it.

"We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," Google's CEO Sundar Pichai said in a separate blog post.

"These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue," he added. "These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe."

Whether the tech giant can actually prevent its technologies from being weaponized (and whether it should) will be up for debate. But the new policy also lays out a roadmap for Google's AI development. For instance, the company's AI systems will be built and tested for safety, and they'll be designed with privacy in mind, an apparent nod to the controversy surrounding Google Duplex, an upcoming feature of the company's voice assistant that can potentially trick people into thinking it's human. Under the new policy, Google's AI technologies will offer "opportunity for notice and consent."

"While this is how we're choosing to approach AI, we understand there is room for many voices in this conversation," Pichai added in his blog post. "As AI technologies progress, we'll work with a range of stakeholders to promote thoughtful leadership in this area."

This article originally appeared on PCMag.com.