
Future Shock

There was a very perceptive article in the current issue of the Economist which argued, basically, that Moore’s Law is within sight of breaking down. The result, though, may not be what you expect. Progress won’t necessarily just get slower; it is more likely to become much, much more unpredictable.

The reason, according to the Economist, is that these days so much more matters than the single chip in a single computer, among other things the role of software, the cloud, and new, specialised architectures optimised for particular tasks.

I think we can see some of this unpredictability unfolding in front of our eyes as Google’s Go-playing computer AlphaGo has beaten Lee Sedol, ranked number 4 in the world, in the first two games of their best-of-five series. Go is seen as a special challenge for AI because it is very much more complex than chess, and a “brute force” approach won’t work.
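To get a sense of the scale involved, here is a rough back-of-the-envelope sketch in Python. The branching factors and game lengths are commonly quoted approximations (about 35 legal moves over roughly 80 plies for chess, about 250 legal moves over roughly 150 plies for Go), so treat the exact exponents as illustrative assumptions rather than precise figures:

    # Rough estimate of game-tree sizes: branching_factor ** game_length.
    # The figures below are commonly quoted approximations, not exact values.
    chess_tree = 35 ** 80     # ~35 moves per position, ~80 plies per game
    go_tree = 250 ** 150      # ~250 moves per position, ~150 plies per game

    print(f"Chess game tree: ~10^{len(str(chess_tree)) - 1}")   # ~10^123
    print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")      # ~10^359

Even the chess tree is far too large to search exhaustively, but engines cope with deep selective search; the Go tree is some 236 orders of magnitude larger again, which is why AlphaGo combines neural networks with Monte Carlo tree search rather than relying on brute force.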

The really interesting thing about this match is that it was generally thought we were 10 years away from building a computer which could win at Go. AlphaGo surprised the world back in October last year when it won against Fan Hui, who is ranked 633rd in the world. What has taken Lee Sedol by surprise is how much better the program has become since – he was apparently quite certain he could beat it.

Hold on to your seats – we could be in for some really quite startling surprises in the coming months and years.


Can robots be moral beings?

In short, we can make robots which display altruistic behaviour, but they aren’t moral agents because we create them, says Joanna Bryson from Bath University. And, what’s more, we should not pretend that they are.

Speaking at a London Futurists session today, she said the key difference between robots and children is that, although we can guide the development of children, ultimately they are free agents. We make robots entirely, so they can’t be said to have moral agency.

But there are powerful forces at work. We humans have an overwhelming urge to impute agency to all sorts of animate and inanimate objects – think dogs and cats, and stuffed rabbits.

Soldiers using bomb disposal robots in Iraq got very attached to them, and would rescue them and ask for them to be repaired rather than replaced by a new robot.

But, she says, there are serious moral hazards involved in treating robots as morally responsible. “Governments and manufacturers are going to want the robots to be responsible so they don’t have to pay when things go wrong.” Take the “killer robots” which are very much in the news at the moment. It isn’t the robots that are the killers, she argues. It is the politicians who have ultimate responsibility for the cost/benefit trade-offs programmed into them. But that is not how it is likely to be portrayed if something goes wrong.

Despite the apparent attractiveness of developing AI robots in our image, Bryson argues it probably doesn’t make any sense to try to make robots more like us.

“All the things that are important to us are because of our evolution, because we are apes.” Not only does imputing our values to robots not make sense, it may even be counterproductive. “It may not make them any better.”

And she doesn’t worry about crossing some magic line where one minute we don’t have AI and the next minute we do – the so-called intelligence explosion.

Neither does she believe that just because of AI the world is suddenly in danger of being turned into a giant paperclip factory as Nick Bostrom has suggested, pointing out that we are already doing that to the world, albeit making more than just paperclips.

She believes things won’t happen like that; AI is simply getting better all the time (there are already AIs that pass the Turing Test, she argues). She does think, though, that we need to consider carefully how we want to proceed – much as we did with nuclear and chemical weapons.

For that reason she was involved with an initiative sponsored by the EPSRC and the AHRC to update Asimov’s famous laws of robotics.
Principles for designers, builders and users of robots

  • Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  • Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights & freedoms, including privacy.
  • Robots are products. They should be designed using processes which assure their safety and security.
  • Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  • The person with legal responsibility for a robot should be attributed.