AI could “go rogue” and be utilized by terrorists in five years

In a new report, The Malicious Use of Artificial Intelligence, experts warn that if breakthroughs in AI continue at their current pace, the technology will soon become powerful enough to outmaneuver many digital and physical defense systems. They call for restrictions to be introduced immediately, before it is too late.

Within five years AI could “go rogue” and be utilized by criminals. Lifelike videos and speech impersonation could be used to target individuals, while drones could be launched to physically attack a person, the report says.

Miles Brundage, research fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organisations, and states — whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling, and repression — the full range of impacts on security is vast.

“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass them.

“It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labor.”

The 100-page report's contributors include the AI research company OpenAI, the digital rights group the Electronic Frontier Foundation, and the Center for a New American Security, a national security think-tank. It warns that advances may include speech synthesis used to impersonate targets, widely available facial recognition software, and lifelike videos created for political manipulation.

Dr. Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk and one of the co-authors, added: “Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to 10 years.

“We live in a world that could become fraught with day-to-day hazards from the misuse of AI, and we need to take ownership of the problems, because the risks are real.

“There are choices that we need to make now, and our report is a call to action for governments, institutions, and individuals across the globe.

“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable — and what type of laws and international regulations might work in tandem with this.”

Professor Stephen Hawking has warned that AI could be the “worst event in the history of our civilization”. While he acknowledged that AI could be used for good, he also said that humans need to find a way to control it so that it does not become more powerful than we are, because “computers can, in theory, emulate human intelligence, and exceed it”.

Looking at the positives, the 75-year-old said AI could help undo some of the damage that humans have inflicted on the natural world, help beat disease and “transform” every aspect of society.

But there are negatives that come with it. Prof Hawking said: “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know.

“So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.

“It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”
