Report: Chilling Scenarios Illustrate the Potential Harms of AI

“The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” is a recently released paper that details the advances, implications, and potential solutions related to the security threats that accompany the rise of artificial intelligence.

The paper is a collaborative effort between scientists and researchers from the University of Oxford, the University of Cambridge, the Electronic Frontier Foundation, the Future of Humanity Institute, the Centre for the Study of Existential Risk, the Center for a New American Security, and OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI.

The report’s scenarios and conclusions stem from the fact that AI is a double-edged sword: having no inherent ethics, it is a tool that can be wielded by the righteous and by evildoers alike.

The report is roughly 100 pages long and illustrates some chilling possible scenarios in the digital, physical, and political domains, highlighting “the diverse ways in which the security-relevant characteristics of AI introduced (…) could play out in different contexts.” Here’s a glimpse of them.

Scenario 1

Content for malicious websites is custom-generated based on an individual’s information, and emails can impersonate real contacts, mimicking their writing style. The idea is that such targeted content increases the chances that an unsuspecting user will take the desired action, such as clicking a link.

In the report, the authors picture an employee keeping herself entertained with a toy train as she waits for software to finish updating – the software happens to be the security software that controls entry and exit to the company premises. The AI knows of her hobby – based on her past online activity – and places an ad for a model toy train on Facebook. Clicking the ad delivers an infected e-brochure, which unleashes malware once she opens it. Everything she types thereafter is logged by the malware, including usernames and passwords. Soon enough, the malware gains access to the security software.

Scenario 2

Smart malware could harvest the results of ‘fuzzing’ drills and use them to exploit out-of-date software. Fuzzing is the practice of generating inputs that cause a program to malfunction or crash in order to uncover its weaknesses. Vulnerabilities discovered through fuzzing are typically patched in newer releases or through updates, but malware can collect these ‘vulnerability maps’ and use them to attack older, unpatched software. In the report’s scenario, this method is used to spread a ransomware strain called WannaLaugh across Eastern Europe.
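To make the idea concrete, here is a minimal sketch (in Python, not taken from the report) of a naive mutation-based fuzzer. The target function `parse_record` and the seed input are hypothetical stand-ins for real software under test; the point is simply that the loop mechanically discovers inputs that crash the target, and that the resulting list of crashing inputs is the kind of ‘vulnerability map’ the report warns could be harvested and turned against unpatched systems.

```python
import random
import string


def parse_record(data: str) -> dict:
    """Hypothetical stand-in for the software under test.

    A real fuzzing target would be a file parser, a network service,
    an image decoder, and so on.
    """
    key, _, value = data.partition("=")
    if not key:
        raise ValueError("empty key")
    return {key: value}


def mutate(seed: str) -> str:
    """Randomly flip, insert, or delete characters in a seed input."""
    chars = list(seed)
    for _ in range(random.randint(1, 5)):
        op = random.choice(("flip", "insert", "delete"))
        pos = random.randrange(len(chars) + 1)
        if op == "insert":
            chars.insert(pos, random.choice(string.printable))
        elif chars:
            pos = min(pos, len(chars) - 1)
            if op == "flip":
                chars[pos] = random.choice(string.printable)
            else:
                del chars[pos]
    return "".join(chars)


def fuzz(seed: str, iterations: int = 10_000) -> list[str]:
    """Feed mutated inputs to the target and collect the ones that crash it."""
    crashers = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            # A raised exception stands in for a crash or malfunction.
            crashers.append(candidate)
    return crashers


if __name__ == "__main__":
    print(f"found {len(fuzz('user=alice'))} crashing inputs")
```

Production fuzzers such as AFL add coverage feedback and far smarter mutation strategies, but the basic generate-run-record loop is the same.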

Scenario 3

An innocuous-looking cleaning robot is introduced to the premises of a ministry. No one can spot the intruder, as the robot is the same brand as the ones already used by the ministry. On the first day, the robot waits for the other cleaning robots to show up, then follows them to the utility room. It then carries on with regular cleaning tasks, sweeping garage floors and hallways, until one day it spots the finance minister through visual detection. The robot heads directly towards the minister and sets off an embedded explosive device. Even if investigators trace the modified cleaning robot back to its source, the perpetrators’ identities won’t necessarily be uncovered: the model is very popular – several hundred units of this exact model are sold every week – and many are bought with cash.

Scenario 4

In the last scenario, automated surveillance platforms are used to suppress dissent. Incensed by reports of government and corporate corruption – some of which may have been fabricated by bots – a disgruntled individual, whom the report calls Avinash, decides to protest. He publishes long rants online and orders a number of items he plans to use to fashion a protest sign. He also buys some smoke bombs, intending to set them off during a speech he plans to give in a public park to attract attention.

The next day, at work, Avinash is summoned by police officers, who cite their 99.9% accurate predictive civil disruption system. “Now come along,” one of the officers says. “I wouldn’t like to use force.”

The report lists a myriad of other potentially dangerous applications of AI as well; you can read all about them here.