AI Has Become Ripe For Misuse, Warn Experts In New Report
By Yoon Sann Wong, 21 Feb 2018
Artificial intelligence has progressed rapidly in the last few years, and with the rise of this technology comes a myriad of social and ethical considerations. From mind-reading AI to game bots that can crush top Dota 2 players, and deepfake algorithms that convincingly replicate people’s faces, including Donald Trump’s, the applications of the technology have become a great cause for concern within society.
In an earlier address, Google CEO Sundar Pichai warned that while “AI is one of the most important things that humanity is working on… it kills people too.”
“We learn to harness fire for the benefits of humanity, but we have to overcome its downsides,” explained Pichai, comparing the technology to fire. In the same vein, 26 authors from 14 institutions across academia, civil society, and industry have come together to publish a report titled The Malicious Use of Artificial Intelligence. It details three domains, digital, physical, and political, in which AI is ripe for the picking by rogue states, terrorists, and criminals.
In an interview with the BBC, Shahar Avin of Cambridge University’s Centre for the Study of Existential Risk explained that the report concentrates on areas of AI that are already available, or are likely to become so within the next five years, rather than on the distant future.
Avin singled out reinforcement learning as a particularly unsettling area: a technique in which AI is trained to superhuman levels of capability through trial and error, without human examples or supervision, as the sketch below illustrates.
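The following sketch is illustrative only and is not drawn from the report: a minimal tabular Q-learning agent on a made-up one-dimensional grid task, with assumed toy parameters, showing how a reinforcement-learning system improves purely from a reward signal, with no human demonstrations involved.

```python
# Illustrative sketch (not from the report): tabular Q-learning on a tiny
# 1-D grid, showing how a reinforcement-learning agent improves purely
# from trial-and-error reward, with no human examples or supervision.
import random

N_STATES = 6          # states 0..5; reaching state 5 yields a reward
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best next action
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy heads right (+1) toward the goal,
# a behaviour discovered from reward alone, never demonstrated by a human.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```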
The report describes several ways the technology could be turned rogue. ‘AlphaGo’, the AI built by Google’s DeepMind team to outplay human ‘Go’ champions, exemplifies techniques that hackers could repurpose to find patterns in data and new exploits in code; drones could be fitted with facial recognition software to target specific individuals; bots could generate fake but lifelike videos for political manipulation; and speech synthesis could let hackers impersonate their targets.
“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it,” said Miles Brundage, a research fellow at Oxford University’s Future of Humanity Institute. “It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”
Dr Seán Ó hÉigeartaigh, one of the co-authors of the report and the executive director of the Centre for the Study of Existential Risk, added, “Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to 10 years.”
“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems—because the risks are real.”
“There are choices that we need to make now, and our report is a call to action for governments, institutions and individuals across the globe.”
“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore—and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable—and what type of laws and international regulations might work in tandem with this.”
You can access the full report here.
[via BBC, main image via Shutterstock]