UPDATE 6/5: Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program

Update on 6/5 with more info. See bottom of page.

*

Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract, Greene said. The meeting, dubbed Weather Report, is a weekly update on Google Cloud’s business.

Google would not choose to pursue Maven today because the backlash has been terrible for the company, Greene said, adding that the decision was made at a time when Google was more aggressively pursuing military work. The company plans to unveil new ethical principles about its use of AI next week. A Google spokesperson did not immediately respond to questions about Greene’s comments.

*

But internal emails reviewed by Gizmodo show that executives viewed Project Maven as a golden opportunity that would open doors for business with the military and intelligence agencies. The emails also show that Google and its partners worked extensively to develop machine learning algorithms for the Pentagon, with the goal of creating a sophisticated system that could surveil entire cities.

Despite the excitement over Google’s performance on Project Maven, executives worried about keeping the project under wraps. “It’s so exciting that we’re close to getting MAVEN! That would be a great win,” Fei-Fei Li, chief scientist for AI at Google Cloud, wrote in a September 24, 2017 email. “I think we should do a good PR on the story of DoD collaborating with GCP from a vanilla cloud technology angle (storage, network, security, etc.), but avoid at ALL COSTS any mention or implication of AI.”

“Google is already battling with privacy issues when it comes to AI and data; I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she added.

https://gizmodo.com/google-plans-not-to-renew-its-contract-for-project-m...

IBM, Amazon, and Microsoft bid on this project too, but Google won. Wasn’t it Google that said, “You can make money without doing evil”? That would be #6 in the linked list below.

Ten things we know to be true

We first wrote these “10 things” when Google was just a few years old. From time to time we revisit this list to see if it still holds true. We hope it does—and you can hold us to that.

https://www.google.com/about/philosophy.html

Pretty much everyone who’s ever been on the internet has already figured out that this is laughable BS.

UPDATE:

However, not everyone is as gung-ho about developing military AI. Earlier this week, Google canceled a controversial AI contract with the Pentagon after receiving backlash from its employees. In a letter to management, 3,000 Google staff said that the company “should not be in the business of war,” adding that working with the military goes against the tech giant’s “Don’t be evil” ethos.

*

Under the contract, Google and the Department of Defense worked together on ‘Project Maven,’ an AI program that would improve the targeting of drone strikes. The program would analyze video footage from drones, track the objects on the ground, and study their movement, applying the techniques of machine learning. Anti-drone campaigners and human rights activists complain that Maven would pave the way for AIs to determine targets on their own, completely removing humans from the ‘kill chain.’

There are other risks too. Developing AI technology could provoke an arms race of sorts with Russia or China. The technology is also still in its infancy, and could make mistakes. US Air Force General John Hyten, the top commander of US nuclear forces, said that once such systems are operational, human safeguards will still be needed to control the ‘escalation ladder’ – the process through which a nuclear missile is launched.

“[Artificial intelligence] could force you onto that ladder if you don’t put the safeguards in,” Hyten said in an interview. “Once you’re on it, then everything starts moving.”

The dangers inherent in allowing AI to make life-or-death decisions were highlighted by an MIT study that found an AI neural network could be easily fooled into thinking a plastic turtle was actually a rifle. Hackers could theoretically exploit this vulnerability, and force an AI-driven missile system to attack the wrong target.

Regardless of the potential human cost of error, the Pentagon is pressing ahead with its research. Some officials interviewed by Reuters believe that elements of the AI missile program could become operational by the early 2020s.

https://www.rt.com/usa/428799-secret-pentagon-ai-project/


Comments

snoopydawg

So is the Bill of Rights. The 4th amendment doesn't do squat anymore to protect us from the government and all of their 800 private goons. And of course we can thank the never ending gift of 9/11 for that.



Apparently holocaust denial is not an issue anymore. Lots of people are denying the one in Gaza with absolutely no repercussions.

joe shikspack

a minor setback in eric schmidt's plan to rule the world.

Pluto's Republic

"Ten things we know to be true." We first wrote these “10 things” when Google was just a few years old.

They're dead to me now.


when used by Google, what does the term 'new ethical principles' mean once you put the Google Decoder Glasses on? Or does it just get turned upside down into more Opposite Day stuff?


Psychopathy is not a political position, whether labeled 'conservatism', 'centrism' or 'left'.

A tin labeled 'coffee' may be a can of worms, or a pathology identified by a lack of empathy and a willingness to harm others to achieve personal desires.