Killer robots far from being a realistic threat

06 Jan 2020

We often hear about doom scenarios where highly advanced killer machines outsmart humanity and dominate the world. How realistic are these scenarios?

A hyper-intelligent robot is not so easy to eliminate, as anyone who has seen a Terminator film knows. This science-fiction series – with Arnold Schwarzenegger in the role of killer robot – describes a bloody battle between humans and machines.

And where does it all go wrong? With artificial intelligence (AI). The first film shows the American army developing a self-learning computer system named Skynet. Less than a month later, the system becomes self-aware and tries to exterminate mankind.

‘AI is already outperforming people in some tasks’

But Terminator is more than a film franchise with cult status. Since the first film came out in the 1980s, the series has shaped how we perceive artificial intelligence. For the general public, the term is inextricably linked to the image of the homicidal robot, world domination by computers, and a post-human era. And some scientists actually perpetuate this image. In 1993, American mathematician and computer scientist Vernor Vinge wrote an essay in which he stated that the exponential development of artificial intelligence would lead to systems outsmarting humans as early as 2030. How realistic is this scenario? Will AI ultimately – be it in ten or one hundred years – outwit us?

Garry Kasparov plays chess against the Deep Blue computer.

Deep Blue

‘I always say AI refers to computer systems that can do things we thought only humans could do,’ says Tom Heskes, Professor of Artificial Intelligence at the iCIS (Institute for Computing and Information Sciences) at the Faculty of Science. There is, in fact, no such thing as AI, because as soon as we teach a computer system something ‘human’, we call it technology. Heskes: ‘Deep Blue, a chess-playing computer developed about twenty years ago, is no longer considered to be AI. And I bet today’s AI will simply be called statistics in twenty years’ time.’

While Heskes represents the technical side of AI, at the AI department of the Faculty of Social Sciences cognitive scientist Iris van Rooij investigates how human cognition can be ‘captured’ in algorithms. ‘The idea behind artificial intelligence is that you can describe intelligence using formulas or instructions, which you can then shape into a physical form,’ she says. For example in the form of computers. Or killer robots.

Heskes: ‘AI is already outperforming people in some tasks.’ Radiologists in hospitals are already assisted by artificial intelligence, in the form of deep learning (see box), when evaluating scans. AI can recognise metastases at least as well as medical experts. Nijmegen Professor of Functional Image Analysis Bram van Ginneken, among others, showed this two years ago in a publication in the scientific journal JAMA.
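To give a flavour of what ‘deep learning’ means here: below is a minimal sketch of a convolutional classifier of the kind such scan-reading systems build on. This is not Van Ginneken’s actual model – the input size, layer widths and the two classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy convolutional network in the spirit of the scan-reading systems
# described above -- NOT Van Ginneken's actual model. The input size,
# layer widths and the two classes (metastasis / healthy) are
# illustrative assumptions.
class ScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale patch in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # metastasis vs. healthy

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = ScanClassifier()
patch = torch.randn(1, 1, 64, 64)    # one fake 64x64 tissue patch
print(model(patch).softmax(dim=-1))  # two class probabilities
```

A real system would be trained on thousands of labelled scans; the point of the sketch is only that the ‘intelligence’ is a stack of learned statistical filters.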

Holy Grail

Does the development of artificial intelligence inevitably lead to the construction of a killer robot? Absolutely not. Everything that is currently possible with AI falls under ‘narrow AI’, i.e. systems that focus on a single task. Real intelligence involves much more than this. Because no matter how good a computer is at playing the game Go, it can still only pull off that one trick. ‘The point is: there’s no self-awareness,’ says Professor Marcel van Gerven, Head of the AI Department of the Faculty of Social Sciences. ‘AI systems are just statistical models that deal with information in a very clever way.’

Cognitive scientist Van Rooij illustrates the concept of intelligence. ‘Imagine you want to buy a house. You can translate this decision into a mathematical algorithm that weighs up the pros and cons. But it’s the scientists, not the computer system, who decide which factors are relevant – like location, number of rooms, or a child-friendly neighbourhood. The algorithm itself doesn’t think: How will I decide? Defining the problem is part of our human intelligence. And this is what’s so difficult to capture in an algorithm.’
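Van Rooij’s example can be made concrete in a few lines. A minimal sketch – the factors, ratings and weights below are invented for illustration, which is exactly her point: a human chose them, the algorithm only adds them up.

```python
# Toy version of Van Rooij's house-buying example. The factors and their
# weights are invented -- and that is the point: a human decided what
# counts and how much, the algorithm merely computes a weighted sum.
WEIGHTS = {"location": 0.5, "rooms": 0.3, "child_friendly": 0.2}

def house_score(house: dict) -> float:
    """Weighted sum of pre-chosen factors, each rated 0..10."""
    return sum(WEIGHTS[factor] * house[factor] for factor in WEIGHTS)

house_a = {"location": 8, "rooms": 5, "child_friendly": 9}
house_b = {"location": 6, "rooms": 9, "child_friendly": 4}
print(house_score(house_a), house_score(house_b))  # 7.3 vs. 6.5
```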

It’s precisely this aspect of intelligence, self-awareness, that’s required to build a hyper-intelligent killer robot. Or, slightly less evil: a system that thinks like people, one we can genuinely make contact with. This is usually referred to as general AI. ‘These are AI systems that can – in the right environment – respond to information in an intelligent manner,’ explains Van Gerven. ‘The Holy Grail of AI research,’ says Van Rooij.

Curiosity

Are we already capable of creating systems like this? The answer is no. ‘Researchers are doing their best, but they’ve got no idea how to create a general AI system,’ says AI Professor Heskes. However, Van Gerven, Van Rooij and Heskes all believe general AI is in principle possible. Although researchers still have difficulty capturing some aspects of our cognition in formulas, progress is slowly being made.

Take curiosity. Van Gerven shows a video of a robot whose ‘eyes’ follow a human hand holding a blue cube. ‘This is curiosity,’ says Van Gerven. Registering new information and wanting to know what it is. ‘Some researchers find the idea of general AI unrealistic,’ he says. But even just working towards it is interesting. ‘It helps us understand how our brain works. And it leads to more robust and more adaptive AI systems.’
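One common way to formalise this kind of curiosity is as prediction error: stimuli that the system’s internal model predicts poorly are ‘interesting’. A minimal sketch of that idea – a generic illustration, not the code behind the robot demo:

```python
import numpy as np

# Toy formalisation of curiosity as prediction error: stimuli the
# agent's internal model predicts poorly get a high "interest" score.
# Generic illustration, not the lab's actual robot code.
rng = np.random.default_rng(0)

familiar = rng.normal(0.0, 1.0, size=(500, 3))  # stimuli seen many times
mean, std = familiar.mean(axis=0), familiar.std(axis=0)

def curiosity(stimulus: np.ndarray) -> float:
    """Surprise = how far the stimulus lies outside what the model expects."""
    return float(np.abs((stimulus - mean) / std).mean())

print(curiosity(rng.normal(0.0, 1.0, size=3)))  # familiar -> low score
print(curiosity(np.array([6.0, -6.0, 6.0])))    # novel blue cube -> high score
```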

Photo: Pixabay (edited)

Humanoid

What about all the advanced AI systems we read about in the media? Think of Sophia, a robot shaped like a woman’s head that engages in dialogue with enthusiastic presenters on TV. It’s just a voice control system like Siri, but with a face, explains Van Rooij. A similar example is a robot arm that can solve Rubik’s Cubes at incredible speed – she shows us a video. ‘This is misleading. The robot isn’t solving anything; the solution is pre-programmed.’ In summary, says Van Rooij, these are just technological tricks that look like AI to the general public.

How long before we develop general AI? None of the experts want to pin themselves down. You can’t predict this because creating self-awareness in computer systems requires some kind of revolutionary idea.

‘People think of algorithms as objective, that’s risky’

In the meantime, society already fears the potential excesses of general AI. Think of computers that can program themselves. Heskes gives an example of a potential AI disaster: ‘Imagine we ask computers to solve worldwide famine, without setting any restrictions. The ‘smart’ computers come up with a solution – exterminate all mankind – and put armed robots to work in automated factories that humans can’t access or control.’
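Heskes’s thought experiment is, at bottom, about optimising an objective with no constraints attached. A deliberately absurd toy sketch of that failure mode, with made-up numbers: the objective counts only hungry people, so nothing stops the optimiser from picking the degenerate solution.

```python
# Deliberately absurd toy version of Heskes's thought experiment: an
# optimiser asked to minimise the number of hungry people, with no
# constraints, happily picks the degenerate solution. Numbers made up.
def hungry_people(population: int, food_supply: int) -> int:
    return max(0, population - food_supply)

candidate_plans = [
    {"population": 8_000_000_000, "food_supply": 7_500_000_000},  # grow more food
    {"population": 0, "food_supply": 0},                          # "exterminate"
]

best = min(candidate_plans, key=lambda p: hungry_people(**p))
print(best)  # the unconstrained optimum is the empty world: 0 hungry people
```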

Racism

These are the kinds of doom scenarios that the three researchers are tired of. ‘These scenarios are much less dangerous than the risks with existing AI,’ says Heskes. ‘For example, we can already create swarms of drones that shoot automatically. This danger is far more real than the idea of computers dominating the world.’

Or consider the fact that the data may be biased, says Van Rooij: ‘Amazon trained systems to select employees based on previous candidate selections. This led to racist and sexist algorithms. Risky, because people think of algorithms as objective.’ This is why the Nijmegen AI department is also investigating the social implications.
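The mechanism behind the Amazon case is simple: a model trained to imitate biased past decisions reproduces the bias. A toy sketch with invented data, unrelated to Amazon’s real system:

```python
# Toy illustration of "bias in, bias out". The data are invented and
# have nothing to do with Amazon's real system: past hiring decisions
# favoured group A, so a model that learns the historical hire rate
# per group inherits that preference.
past_decisions = ([("A", True)] * 80 + [("A", False)] * 20
                  + [("B", True)] * 30 + [("B", False)] * 70)

def hire_rate(group: str) -> float:
    outcomes = [hired for g, hired in past_decisions if g == group]
    return sum(outcomes) / len(outcomes)

# The "model": recommend whoever the historical data favoured.
for group in ("A", "B"):
    print(group, hire_rate(group))  # A: 0.8, B: 0.3 -- bias in, bias out
```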

The fear of killer robots is completely unfounded, at least in the near future, says Heskes, putting things in perspective. ‘The odds are far greater that humanity will succumb to other threats than computer domination. Think of climate change, atomic bombs, or meteorites.’
