Google CEO Sundar Pichai wrote in a blog post published Thursday that while the company will not develop artificial intelligence for weapons, it will continue to work with governments and the military in areas such as cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.
“These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe,” he wrote.
The post follows Alphabet’s shareholders meeting on Wednesday, when management at Google’s parent company, which has voting control, quashed several proposals brought by employees that would have forced executives to rethink a hiring practice that some say stifles innovation.
Google’s new guidelines assert that the company will not design or deploy AI in applications or technologies likely to cause harm, and that where there is a risk of harm, it will proceed only when the benefits substantially outweigh the risks and it can incorporate appropriate safety constraints.
Also excluded are weapons and other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people, technologies that gather or use information for surveillance in violation of internationally accepted norms, and technologies whose purpose contravenes widely accepted principles of international law and human rights.
Pichai also published objectives for AI applications, among them being socially beneficial, avoiding the creation or reinforcement of unfair bias, being built and tested for safety, being accountable to people, incorporating privacy design principles, and upholding high standards of scientific excellence.