Can robots be moral beings?

In short, we can make robots that display altruistic behaviour, but they aren't moral agents because we create them, says Joanna Bryson of Bath University. And what's more, we should not pretend that they are.

Speaking at a London Futurists session today, she said the key difference between robots and children is that, although we can guide the development of children, ultimately they are free agents. We make robots entirely, so they can't be said to have moral agency.

But there are powerful forces at work. We humans have an overwhelming urge to impute agency to all sorts of animate and inanimate objects: think dogs and cats, and stuffed rabbits.

Soldiers using bomb-disposal robots in Iraq got very attached to them; they would rescue them and want them repaired rather than replaced by a new robot.

But, she says, there are serious moral hazards involved in treating robots as morally responsible. “Governments and manufacturers are going to want the robots to be responsible so they don’t have to pay when things go wrong.” Take the “killer robots” which are very much in the news at the moment. It isn’t the robots that are the killers, she argues. It is the politicians who have ultimate responsibility for the cost/benefit trade-offs programmed into them. But that is not how it is likely to be portrayed if something goes wrong.

Despite the apparent attractiveness of developing AI robots in our image, Bryson argues it probably doesn't make any sense to try to make robots more like us.

“All the things that are important to us are because of our evolution, because we are apes.” Not only does imputing our values to robots not make sense, it may even be counterproductive. “It may not make them any better.”

And she doesn’t worry about crossing some magic line where one minute we don’t have AI and the next minute we do – the so-called intelligence explosion.

Neither does she believe that, just because of AI, the world is suddenly in danger of being turned into a giant paperclip factory, as Nick Bostrom has suggested; we are already doing that to the world, she points out, albeit making more than just paperclips.

She believes there will be no such sudden leap; AI is simply getting better all the time (there are already AIs that pass the Turing Test, she argues). She does think, though, that we need to consider carefully how we want to proceed – much as we did with nuclear and chemical weapons.

For that reason she was involved with an initiative sponsored by the EPSRC and the AHRC to update Asimov’s famous laws of robotics.
Principles for designers, builders and users of robots

  • Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  • Humans, not robots, are responsible agents. Robots should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy.
  • Robots are products. They should be designed using processes which assure their safety and security.
  • Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  • The person with legal responsibility for a robot should be attributed.
