Why Badly Trained AI Is a Bigger Threat Than a Robot Uprising

At the present level of AI development, humanity doesn’t have to worry about a machine uprising just yet. However, the use of improperly trained AI in important fields, and attempts to use it to exercise control over people’s lives, may pose a real threat in the near future. This was the topic of a seminar on ‘The Unexpected Threats of AI’ recently hosted by the HSE University Laboratory of Transcendental Philosophy.

Professor Svetlana Klimova, who heads the laboratory, asked invited researcher and speaker Aleksandr Khomyakov about the dangers of the widespread adoption of AI. Mr. Khomyakov believes that the possibility of machines rising up and enslaving humanity has been greatly exaggerated, and that such an outcome is not possible at the current level of AI sophistication. ‘Of course, we can still get scared at the idea of our irons rising up to singe us all,’ he joked.

For now, AI remains just a program—the real threat comes from the people using it. ‘The dangers posed by someone using AI improperly are much graver than a hypothetical machine uprising,’ he explained. At the same time, we may not yet realize all the potential dangers of AI, and we run the risk of using it without fully understanding the consequences—much as radioactive cosmetics were popular at the start of the 20th century, before their dangers were understood.

Mr. Khomyakov noted that European countries are examining the possibility of restricting the use of AI in certain fields. Russian lawmakers are eager to outlaw its use, but the researcher believes that this could stunt development in the field. It is important to strike a balance between fostering progress and preventing the potential negative consequences of implementing AI. The key, he believes, is to take preventative action.

Another potential problem is the emergence of AIs capable of creating and disseminating texts that are hard to tell apart from those written by humans. A bot powered by a version of the GPT language model successfully impersonated a human for several weeks. Sometimes, it is impossible to distinguish AI-written texts from the real thing, and a machine could even manipulate the emotions of the people talking to it. Such bots could post comments on social media to influence the information space, nudge people into taking certain actions, and promote certain opinions, Mr. Khomyakov explained.

The scariest thing is that some talented programmer with unclear motivations could be in control of thousands of messages a second. That’s the real threat: that someone might try to use AI to manipulate people

More problems could arise if an AI were given some measure of decision-making responsibility without properly assessing its capabilities. For example, if a doctor relied on an AI to analyze X-ray scans, any mistaken conclusions it made could have serious consequences. Using AI to filter calls to emergency services is also risky. ‘If an AI misinterpreted what a caller was saying in a stressful situation, it could endanger lives,’ the expert explained.

This does not mean that there are no suitable applications of AI. In fact, greater use of the technology in the future is inevitable. AI systems can store and process large amounts of data and can handle routine tasks that humans do not want to do. For example, they could allow traffic police to track not only cars but also people. After setting up cameras and an AI, such a system could operate by itself. Naturally, this raises ethical questions.

In some countries, scandals have arisen around AI-assisted hiring practices that seem to disfavour black and Asian applicants. And in cases where driverless cars have caused accidents with human casualties, it emerged that the AI’s training data did not include images of ambulances, overturned vans, or parked fire engines.

According to Mr. Khomyakov, it is important to remember that AIs are statistics-based programs that are 95% accurate at most—meaning that at least 5% of their decisions will be erroneous. He posed the question: ‘Are we willing to accept such a high probability of error when human lives are on the line?’
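
As a rough illustration of what a 5% error rate means at scale, here is a minimal back-of-the-envelope sketch in Python. The call volume and accuracy figures are hypothetical assumptions chosen for illustration, not numbers cited at the seminar.

    # Hypothetical illustration: expected number of wrong decisions per day
    # made by a statistics-based system at a given accuracy level.
    def expected_errors(cases_per_day: int, accuracy: float) -> float:
        """Return the expected count of erroneous decisions per day."""
        return cases_per_day * (1.0 - accuracy)

    calls_per_day = 10_000  # assumed daily volume for an emergency-line filter
    for accuracy in (0.95, 0.99, 0.999):
        errors = expected_errors(calls_per_day, accuracy)
        print(f"accuracy {accuracy:.1%}: about {errors:,.0f} mishandled calls per day")

At 95% accuracy, that comes to roughly 500 mishandled calls a day—the scale of risk behind Mr. Khomyakov’s question.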

In his opinion, another danger of the mass implementation of AI is widespread unemployment, particularly among those with fewer qualifications and those engaged in routine work. This could lead to a sharp increase in antisocial behaviour and crime. There is also a threat to those working in creative professions: in India, for example, AI has been used to develop designs for clothes and shoes that are more appealing to consumers and offer greater variety. There are also websites and programs capable of writing poems (after being fed just two lines and a starting rhyme) and creating interior designs. Creative work no longer necessarily requires human talent, and this could cause problems.

Another danger is the possibility of people disengaging from the real world in favour of virtual or augmented worlds. Technology may allow people to create virtual environments and populate them with AI characters of their choosing. ‘An AI could fall in love with you. What young man would turn that down?!’ Mr. Khomyakov said. After all, an AI wouldn’t ‘argue, ask you for money, or get angry with you.’

He added that there are already headsets and suits capable of delivering small electric impulses to users in order to create various physical and even emotional sensations.

We’re losing touch with reality and spending more and more time talking to people on video chats and social networks. It’s getting harder to tell whether they are real people or virtual constructs

AIs capable of independent decision-making could also pose a major threat in the future. According to Mr. Khomyakov, this will become possible when such a system obtains a model of itself. At that point, an AI would be capable of refusing to follow instructions and could start making decisions independently. To avoid this, it is vital to develop preventative measures that keep AIs from getting out of control.

Diana Gasparyan, Senior Research Fellow of the HSE University Laboratory of Transcendental Philosophy, believes that some of the threats outlined by Mr. Khomyakov are more serious than others. In her view, the danger of people abandoning reality is minimal, because talking to AIs is not engaging enough for people. However, AI developers could try to fool users by creating virtual conversation partners that merely appear to have subjectivity.

According to Aleksandr Khomyakov, the fact that millions of people immerse themselves in video games reflects the dangers of virtualization. Players realize that they are in a made-up world, but ‘they stay up all night playing games until their eyes are bloodshot because they get an emotional experience from them.’ He suggests that this may facilitate the development of lifelike characters that players can form emotional connections with.