Google outlined seven principles and guidelines for its artificial intelligence and pledged not to implement the technology in military weapons or surveillance projects. File Photo by lightpoet/Shutterstock
By Daniel Uria, UPI
Google revealed a set of guidelines for its future artificial-intelligence development, including a pledge that it will not allow its technology to be used in military weapons.
CEO Sundar Pichai laid out the seven ethical principles and guidelines Thursday, describing how Google plans to manage the application of artificial intelligence in both its commercial and noncommercial endeavors.
"These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions," Pichai wrote in a blog post.
The principles generally assert that artificial intelligence should be safe, socially beneficial and avoid creating or reinforcing unfair bias.
In addition to the seven principles, the company outlined a series of applications for artificial intelligence it won't pursue.
Google said it will avoid surveillance and information gathering technology that violates "internationally accepted norms."
It also pledged to not pursue "technologies that cause or are likely to cause overall harm" unless "the benefits substantially outweigh the risks," nor will it pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
On Saturday, Google announced it won't renew a contract to provide AI technology to the U.S. Department of Defense to hasten analysis of drone footage by automatically interpreting video images.
About 4,000 Google employees signed a petition calling on the company to cancel the contract for the program, known as Project Maven, and demanded the company implement "a clear policy stating that neither Google nor its contractors will ever build warfare technology."