Google outlines their principles and objectives for using Artificial Intelligence

Mae Love
June 9, 2018

Google has banned the use of its artificial intelligence technology in weapons, or in surveillance that violates internationally accepted norms, CEO Sundar Pichai said in a blog post.

But Google's new limits appear to have done little to slow the Pentagon's technological researchers and engineers, who say other contractors will still compete to help develop technologies for the military and national defense.

Google insisted last week that its AI technology is not being used to help drones identify human targets, but told employees that it would not renew its contract after it expires in 2019.

But can Google realistically stick to its now-public principles? About a dozen Google employees reportedly resigned due to the company's involvement in the program.

Several Google employees, including former CEO and board chairman Eric Schmidt and Matt Cutts, who used to run Google's search spam team, have left the company in the last few years to work for the Pentagon. In the blog post, Google says that its principles "are not theoretical concepts" but rather "concrete standards" that will "actively govern" its future AI work.

Google released the guidelines soon after it said it would stop working with the military on the controversial Project Maven.

However, the search company will take on government contracts that it believes won't be used to hurt people (or at least will be beneficial enough to justify the harm).

Google has also made moves it may not have made in the past, such as blocking apps and tools that circumvent censorship in other countries from using its cloud platform. Several employees said they did not think the principles went far enough to hold Google accountable. For instance, Google's AI guidelines include a nod to following "principles of worldwide law" but do not explicitly commit to following international human rights law.

Pichai said Google would use artificial intelligence to make its products more useful.

Peter Asaro, vice chairman of the International Committee for Robot Arms Control, said this week that Google's backing off from the project was good news because it slows down a potential AI arms race over autonomous weapons systems. "Google is already battling with privacy issues when it comes to AI and data; I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry". The Web giant, famous for its past "Don't be evil" mantra, is in the running for two multibillion-dollar Defense Department contracts for office and cloud services. "In the absence of positive actions, such as publicly supporting a global ban on autonomous weapons, Google will have to offer more public transparency as to the systems they build".
