The UK’s Engineering and Physical Sciences Research Council and Arts and Humanities Research Council have devised a draft ‘Five Ethics for Roboticists’ to stimulate debate on the social, legal and ethical issues in robotics. It takes Asimov’s Laws as a starting point:
- a robot may not injure a human being or, through inaction, allow a human being to come to harm;
- a robot must obey any orders given to it by humans, except where such orders would conflict with the first law;
- a robot must protect its own existence as long as such protection does not conflict with the first or second law.
However, the big departure was the decision that it is the ethics of the roboticists, not of the robots, that we need to worry about, and that the exercise would look only up to ten years ahead. The Five Ethics for Roboticists proposed are:
- robots are multi-use tools, and should not be designed solely or primarily to kill or harm humans, except in the interests of national security;
- humans, not robots, are responsible agents, so robots should be designed and operated as far as is practicable to comply with existing laws and fundamental human rights and freedoms, including privacy;
- robots are products which should be designed using processes which assure their safety and security;
- robots are manufactured artefacts, so they should not be designed in a deceptive way to exploit vulnerable users (their machine nature should be transparent);
- it should always be possible to find out who is legally responsible for a robot.
Importantly, these “ethics” downplay the specialness of robots, treating them as tools and products to be designed and operated within legal and technical standards.
Information taken from a New Scientist article http://www.newscientist.com/article/mg21028111.100-five-roboethical-principles-for-humans.html by Alan Winfield, UWE, Bristol.
The full robotics ethics draft discussion document is at: http://www.epsrc.ac.uk/about/progs/mmme/publicengagement/Pages/principlesofrobotics.aspx