A few days ago, we published news about Facebook and Google, which decided to invest in artificial intelligence, hiring academic labs and researchers to work for them. But that's not all: now the United States Department of the Navy wants to teach robots to make moral decisions, so that they can differentiate between right and wrong.

To teach the robots, researchers will apply different statistical learning techniques to training scenarios. One example scenario takes place on a battlefield: a robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital. On the way, it encounters a Marine with a fractured leg. Should the robot abort its mission to assist the injured Marine? Will it?

The project is led by the Office of Naval Research (ONR), the office in the United States Department of the Navy that coordinates, executes, and promotes the science and technology programs of the U.S. Navy and Marine Corps through schools, universities, and government research laboratories. ONR will work on this project in partnership with Tufts University in Medford, Massachusetts, Brown University in Providence, Rhode Island, and Rensselaer Polytechnic Institute in New York. The group wants to explore the challenges of teaching autonomous robots the concepts of good and bad, and their consequences. Here, moral competence can be roughly thought of as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree.

This brings to mind Asimov's laws of robotics, including the later addition (the Zeroth Law), still from Asimov and found in the book "Foundation and Earth" (1986):
0. A robot may not harm humanity, or through inaction allow humanity to come to harm.
1. A robot may not harm a human, or through inaction allow a human to come to harm, unless this interferes with the zeroth law.
2. A robot must obey orders given to it by a human being unless such orders interfere with the zeroth or first laws.
3. A robot must defend its own existence unless such defense interferes with the zeroth, first, or second laws.
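The four laws form a strict priority ordering: a violation of a higher-numbered-priority law outweighs any violation of the laws below it. Purely as an illustration (this is not the ONR project's actual approach, and the action dictionaries and helper names below are invented), that ordering can be sketched as a lexicographic comparison of law violations:

```python
# A hypothetical sketch of Asimov's laws as a strict priority ordering.
# Each candidate action records which laws it would violate; actions are
# compared lexicographically, so a zeroth-law violation outweighs everything
# below it, a first-law violation outweighs the second and third, and so on.

LAWS = ("zeroth", "first", "second", "third")

def law_violations(action):
    """Tuple of booleans, highest-priority law first."""
    return tuple(law in action["violates"] for law in LAWS)

def choose(actions):
    # Lexicographic tuple comparison: the action whose highest-priority
    # violation sits lowest in the hierarchy wins.
    return min(actions, key=law_violations)

# The battlefield scenario above: aborting the delivery disobeys orders
# (second law), but passing the Marine by lets a human come to harm
# through inaction (first law), so the robot should stop and assist.
abort = {"name": "abort mission, assist Marine", "violates": {"second"}}
deliver = {"name": "deliver medication, pass Marine by", "violates": {"first"}}

print(choose([abort, deliver])["name"])  # abort mission, assist Marine
```

Real moral competence is of course far messier than a fixed rule hierarchy, which is precisely why the project leans on statistical learning rather than hand-written priorities.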
Otherwise, robots might get the idea to do what Gort the robot had planned to do…