Google Warfare AI Project With Pentagon Is Disastrous For Humanity


Hundreds of Google employees are worried about the company’s partnership with the Pentagon in warfare AI technology, fearing it may be the biggest threat to humankind. The “questionable” alliance could result in a “disaster for humanity.”

Google employees wrote a letter to the company’s CEO, calling on the US tech giant to immediately pull out of a controversial program that many fear could be used for warfare. “We believe that Google should not be in the business of war,” the letter obtained by The New York Times and published earlier this week stated.

Gizmodo broke the news about Google’s partnership with the US Department of Defense (DoD) last month, adding that Project Maven, whose stated mission is to “accelerate DoD’s integration of big data and machine learning,” was established in April 2017.

The project will see Google developing AI surveillance to help the US military scrutinize video footage captured by US government drones “to detect vehicles and other objects, track their motions, and provide results to the Department of Defense.”

Google claims that the technology is human-friendly, designed to “save lives” and “scoped to be for non-offensive purposes.” But Noel Sharkey, Emeritus professor of AI at Sheffield University, said the fears of Google employees “are correct.”

The Maven program “is all about bringing AI to the immediate conflict zone,” he argued, adding that Google may simply be too naïve about the real use of its technology. “Once you start working with the military, you have no control over what they use your product for, and that’s very worrying,” Professor Sharkey said.

He cautioned that while drones now have human operators, who are at least “looking at the target, engaging with the target and trying to calculate its legitimacy,” things can take a drastic turn.

“If Google’s imagery is very good, they will stop using that operator, allow robots to go out on their own, find their own targets and kill them without human intervention. And this is going to be a disaster for humanity.”

Another concern is privacy. “Google is a global company and is working for the Pentagon now, and the Pentagon is the United States. For me, in Britain, it means it’s a foreign power. How far will they slide into bed with the Pentagon?” Sharkey said.

“Google owns most of our data, and I don’t want the Pentagon having my data.” The US Department of Defense spent a whopping $7.4 billion on AI-related areas last year, according to the Wall Street Journal.

The million-dollar question is whether “this is going to lead to saving lives, or is it going to lead to more use of the technology, more drone strikes, more countries engaging in this use of the technology?”

It’s really “questionable,” Dr. Mark Gubrud, physicist and arms control researcher at the University of North Carolina, said. “It’s very exciting to see a movement arise among Google employees of concern about their company’s contribution to the world’s drift towards autonomous weapons, killer robots.”

According to the Intercept, Google is busy developing technology that will allow drone analysts to “interpret the vast image data vacuumed up from the military’s fleet of 1,100 drones to better target bomb strikes against the Islamic State.”

This April marks five years since the launch of the Campaign to Stop Killer Robots. Its supporters object to “permitting machines to determine who or what to target on the battlefield,” pointing to numerous problems, including ethical and legal ones.

“Bold action is needed before technology races ahead and it’s too late to preemptively ban weapons systems that would make life and death decisions on the battlefield,” Steve Goose, arms division director at Human Rights Watch, and co-founder of the Campaign to Stop Killer Robots, said in a statement in November.