By Alton Parrish (Reporter)
Why Asimov’s Laws of Robotics Should Be Updated for the 21st Century

Saturday, April 1, 2017 14:08

(Before It's News)

Many EU-funded projects are working towards advancing robotics to assist people with overcoming societal challenges, such as providing care for the elderly or providing disaster relief. An academic who worked on one such project has now argued that author Isaac Asimov’s Laws of Robotics are not the moral guidelines that they appear and should be updated.

Isaac Asimov is one of the most celebrated sci-fi writers, and arguably his most famous creation is the ‘Three Laws of Robotics.’ In a nutshell, these are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
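The strict precedence between the laws (each law yields to the ones above it) can be illustrated with a minimal sketch. This is purely a toy model, not any real robotics framework; the `Action` class and `permitted` function are hypothetical names invented for illustration.

```python
# Illustrative sketch only: the Three Laws as an ordered rule check.
# The ordering of the checks is what encodes each law's subordination
# to the laws above it.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # the action would injure a human
    allows_harm: bool = False       # inaction here would let a human come to harm
    human_order: bool = False       # a human ordered this action
    self_destructive: bool = False  # the action endangers the robot itself

def permitted(action: Action) -> bool:
    """Return True if the action is allowed under the Three Laws."""
    # First Law: never injure a human, nor through inaction allow harm.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders (First Law conflicts were already
    # rejected above, so obedience never overrides it here).
    if action.human_order:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    if action.self_destructive:
        return False
    return True
```

For example, an order to perform a self-destructive task is permitted (the Second Law outranks the Third), while the same self-destructive act without an order is not.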



Credit: © imredesiuk, Shutterstock

Applying Asimov today

Prof Tom Sorell of the University of Warwick, UK, has recently argued that Asimov’s three laws seem a natural response to the idea that robots will one day be commonplace and will need internal programming to prevent them from harming people. However, he argues that whilst Asimov’s laws are organised around the moral value of preventing harm to humans, they are not as easy to interpret as they first appear. Prof Sorell, an expert on robot ethics, worked on the EU-funded ACCOMPANY project that developed a robotic companion to help elderly people live independent lives.

Sorell writes that Asimov’s laws still seem plausible at face value because there are legitimate fears over the harm robots can do to humans, for example, the recent fatalities in the US caused by malfunctioning autonomous cars. But we are also living in an age where increasingly sophisticated robots are being utilised to execute ever more complex tasks designed to protect and care for humans.

These include not only robots designed to care for the elderly, as in ACCOMPANY (see the CORDIS Results Pack on ICT for Independent Living for more on EU-funded projects utilising robots to assist the elderly) but also robots that are designed to provide disaster relief. One example of this is the robot utilised by an EU-funded project to assess damaged heritage buildings in the earthquake-stricken Italian town of Amatrice. 

New robots, new paradoxes – new laws?

Asimov’s laws become shakier when considering the development of human-directed military drones designed to kill other humans from afar. Paradoxically, if a robot is being directed by a human controller to save the lives of their co-citizens by killing the humans that are attacking them, it can be said to be both following and violating Asimov’s First Law. 

Also, if the drone is directed by a human, it can be argued that responsibility for loss of life in combat situations lies with the human, not the drone. Indeed, armies equipped with drones will vastly reduce the number of human lives lost overall – perhaps it is better to use robots rather than humans as cannon fodder.

At the other end of the scale, Asimov’s laws are appropriate if keeping an elderly person safe is the robot’s main goal. But often robotics fits into a range of ‘assistive’ technologies that help the elderly to be independent, the goal of the ACCOMPANY project. This means allowing them to make their own decisions, including decisions that could result in injury to themselves, such as through falling. A robot that allowed such choices would be breaking the First Law by failing, through inaction, to prevent human injury.

However, Sorell argues that human autonomy must be respected, by both other humans and robots. Elderly people who make choices that preserve their independent living, even though those choices could put them at risk of injury, also need to be respected.

So whilst Asimov’s laws have influenced robotics developers for decades, now is perhaps the time to re-evaluate their effectiveness and begin discussions on a new set of laws that work well with the ongoing, awe-inspiring breakthroughs in robot and AI technology taking place in Europe and across the world. 
