Secret AI-Driven Warfare Technology Will Be the Biggest Threat to Humanity

Major world powers are quietly developing AI-driven warfare technology to prepare themselves for the future of war. The Pentagon’s highly classified research, for example, uses AI to scan huge amounts of data for signs of an imminent missile launch. Although the sensitive nature of the research means it is still shrouded in secrecy, multiple Department of Defense sources have revealed that several AI-driven programs are currently underway, most of them aimed at using artificial intelligence to anticipate and warn of enemy missile launches.

Rep. Mac Thornberry (R-Texas), chairman of the House Armed Services Committee, claims that the Russians and the Chinese are definitely pursuing these sorts of things as well, “probably with greater effort in some ways than we have.”

The Trump administration has proposed tripling funding for one AI-driven missile program next year to $83 million. While $83 million may seem like a modest sum, it funds just one of many hush-hush programs and represents Washington’s growing interest in military AI technology.

The technology allows computer systems to scour vast amounts of data, such as drone footage or satellite imagery, far faster and more accurately than humans can. In one pilot program focused on North Korea, AI is used to locate and track mobile missiles that can be hidden in tunnels, forests, and caves. The system then assesses whether the activity constitutes an immediate threat and alerts commanders.
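To make the idea concrete, here is a minimal sketch of what such a detection-and-alert loop might look like, built on an off-the-shelf object detector (torchvision’s Faster R-CNN, assuming torchvision 0.13 or later). The file names, the alert threshold, and the idea of keying alerts to detector confidence are illustrative assumptions, not details of any classified program.

```python
# A minimal detection-and-alert sketch over imagery frames (illustrative only).
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

ALERT_THRESHOLD = 0.9  # assumed confidence above which commanders would be alerted

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def scan_image(path: str) -> list[dict]:
    """Run the detector on one image and return its high-confidence detections."""
    img = convert_image_dtype(read_image(path), torch.float)  # uint8 -> float in [0, 1]
    with torch.no_grad():
        detections = model([img])[0]
    return [
        {"box": box.tolist(), "label": int(label), "score": float(score)}
        for box, label, score in zip(
            detections["boxes"], detections["labels"], detections["scores"]
        )
        if score >= ALERT_THRESHOLD
    ]

if __name__ == "__main__":
    for frame in ["frame_0001.png", "frame_0002.png"]:  # hypothetical file names
        hits = scan_image(frame)
        if hits:
            print(f"ALERT: {len(hits)} high-confidence detections in {frame}")
```

A production system would use a detector trained on overhead imagery and feed alerts into an analyst’s workflow rather than a print statement; the point here is only the scan, score, threshold, alert structure.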

Once signs of a missile launch are detected, the US government would then have time to either pursue diplomatic options or move in and destroy the missiles, ideally before they even leave the ground.

Earlier this week, Google also canceled a controversial AI contract with the Pentagon after a backlash from its employees. In a letter to management, 3,000 Google staff said that the company “should not be in the business of war,” adding that working with the military goes against the tech giant’s “Don’t be evil” ethos.

Under the contract, Google and the Department of Defense worked together on ‘Project Maven,’ an AI program intended to improve the targeting of drone strikes. The program would analyze video footage from drones, track objects on the ground, and study their movement using machine learning techniques. Anti-drone campaigners and human rights activists complain that Maven would pave the way for AIs to determine targets on their own, completely removing humans from the ‘kill chain.’
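The “track objects on the ground and study their movement” step amounts to linking per-frame detections into persistent tracks. The toy nearest-neighbour centroid tracker below illustrates that general technique; it is a hypothetical sketch under assumed parameters, not Project Maven’s actual code.

```python
# A toy nearest-neighbour centroid tracker: links detections across video frames
# by matching each new centroid to the closest existing track (illustrative only).
import math
from itertools import count


class CentroidTracker:
    def __init__(self, max_distance: float = 50.0):
        self.max_distance = max_distance  # assumed max pixel jump for the same object
        self.tracks: dict[int, tuple[float, float]] = {}  # track id -> last centroid
        self._ids = count()

    def update(self, centroids: list[tuple[float, float]]) -> dict[int, tuple[float, float]]:
        assigned: dict[int, tuple[float, float]] = {}
        for cx, cy in centroids:
            # Greedily match this detection to the closest unassigned existing track.
            best_id, best_dist = None, self.max_distance
            for tid, (px, py) in self.tracks.items():
                if tid in assigned:
                    continue
                dist = math.hypot(cx - px, cy - py)
                if dist < best_dist:
                    best_id, best_dist = tid, dist
            if best_id is None:
                best_id = next(self._ids)  # no nearby track: start a new one
            assigned[best_id] = (cx, cy)
        self.tracks = assigned
        return assigned


# Usage: feed detection centroids frame by frame and watch track IDs persist.
tracker = CentroidTracker()
print(tracker.update([(100, 100), (400, 250)]))  # {0: (100, 100), 1: (400, 250)}
print(tracker.update([(105, 102), (395, 260)]))  # same IDs follow the moving objects
```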

There are other risks too. Developing AI technology could provoke an arms race of sorts with Russia or China. The technology is also still in its infancy and could make mistakes. US Air Force General John Hyten, the top commander of US nuclear forces, said that once such systems are operational, human safeguards will still be needed to control the ‘escalation ladder’ – the process through which a nuclear missile is launched.

“Artificial intelligence could force you onto that ladder if you don’t put the safeguards in,” Hyten said in an interview. “Once you’re on it, then everything starts moving.”
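One way to picture the safeguard Hyten is describing is a human-in-the-loop gate: the AI may recommend, but nothing happens without explicit operator authorization. The sketch below is a hypothetical toy, with assumed names and a made-up recommendation format, meant only to show where the human sits in the loop.

```python
# A hypothetical human-in-the-loop approval gate (illustrative, not a real system).
from dataclasses import dataclass


@dataclass
class Recommendation:
    target_id: str      # assumed identifier for the flagged site
    confidence: float   # the AI's confidence in its assessment
    rationale: str      # short explanation surfaced to the operator


def execute_with_human_approval(rec: Recommendation) -> bool:
    """Show the AI's recommendation and act only if a human operator approves."""
    print(f"AI recommendation: {rec.target_id} "
          f"(confidence {rec.confidence:.0%}) - {rec.rationale}")
    answer = input("Operator, authorize action? [yes/no] ").strip().lower()
    if answer != "yes":
        print("Action withheld: no human authorization.")
        return False
    print("Action authorized by human operator.")
    return True


if __name__ == "__main__":
    execute_with_human_approval(
        Recommendation("site-042", 0.93, "activity consistent with launch preparation")
    )
```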

The dangers inherent in allowing AI to make life-or-death decisions were highlighted by an MIT study that found a neural network could easily be fooled into classifying a plastic turtle as a rifle. Hackers could theoretically exploit this kind of vulnerability and force an AI-driven missile system to attack the wrong target.
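The turtle result is an instance of adversarial examples: tiny, deliberately crafted changes to an input that flip a model’s prediction. The sketch below shows the simplest such attack, the fast gradient sign method (FGSM), against a stock PyTorch classifier; the MIT demonstration used a more elaborate, 3D-robust attack, but the underlying idea is the same. The model choice, epsilon, and stand-in image are illustrative assumptions.

```python
# Fast gradient sign method (FGSM): nudge every pixel slightly in the direction
# that increases the classifier's loss, often enough to change its prediction.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="DEFAULT").eval()  # stock ImageNet classifier (assumed stand-in)

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (shape 1x3xHxW, values in [0, 1])."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage with a random stand-in image (a real attack would start from an actual photo):
x = torch.rand(1, 3, 224, 224)
label = int(model(x).argmax())                 # the model's original prediction
x_adv = fgsm_attack(x, label)
print(int(model(x_adv).argmax()) == label)     # may print False: the prediction flipped
```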

Regardless of the potential human cost of error, the Pentagon is pressing ahead with its research. Some officials believe that elements of the AI missile program could become operational by the early 2020s. By developing these kinds of dangerous warfare technologies, we may soon find ourselves on the brink of extinction.
