How Can We Ensure That Robots Uphold Moral Norms?


Bill Gates, Elon Musk, Stephen Hawking and other famous tech figures have warned us: the rapid development of artificial intelligence may threaten the future of humanity. Scientific predictions are unsettling: Vytautas Magnus University (VMU) Professor John Stewart Gordon says it is only a matter of time before robots become as intelligent as people and eventually surpass us, possibly within the next 50 years. However, we need to start preparing for that now.

Stories of robots outmatching humans may sound like science fiction, but robots are already a significant part of the ongoing fourth industrial revolution and have radically changed the global economy. The application of robots and artificial intelligence is growing in education, healthcare, elderly care, and the military. Self-driving cars, drones and other innovations are gaining popularity. The McKinsey Global Institute reports that representatives of more than 70 professions could entrust 90 percent of their activities to robots, including various jobs in the food industry, laboratories, postal services, and other fields.

Due to the increasing responsibility entrusted to robots, in the coming decades people must make decisions on the ethical and moral issues this raises: how can we ensure that robots remain ethical and uphold moral norms, and who should determine those norms? For instance, how should a self-driving car decide in the face of an unavoidable accident: drive straight and hit a pedestrian, or swerve and injure the driver? And what will happen when military robots can detect enemy soldiers and target them for bombing, but the enemy is hiding in a village among civilians?
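The self-driving-car dilemma above can be made concrete with a deliberately oversimplified sketch. Everything in it is hypothetical: the `Outcome` type, the harm scores, and the harm-minimisation rule are illustrations invented for this example, not a description of how any real vehicle decides.

```python
# Purely illustrative sketch: a rule-based "ethical governor" for an
# autonomous vehicle facing an unavoidable collision. The harm scores
# and the decision rule here are hypothetical, not a real system.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str           # e.g. "continue_straight" or "swerve"
    harmed_party: str     # who this action puts at risk
    expected_harm: float  # hypothetical severity estimate, 0..1

def choose_action(outcomes):
    """Pick the action with the lowest expected harm.

    This hard-codes one contested stance (harm minimisation). A
    rights-based rule would be written very differently, which is
    exactly the question the article raises: someone must decide
    which norms the machine follows.
    """
    return min(outcomes, key=lambda o: o.expected_harm).action

dilemma = [
    Outcome("continue_straight", "pedestrian", 0.9),
    Outcome("swerve", "driver", 0.6),
]
print(choose_action(dilemma))  # the harm-minimising rule picks "swerve"
```

The point of the sketch is not the code but the design choice buried in `choose_action`: the ethical stance is a parameter someone has to set, and different norms yield different behaviour from identical hardware.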

“Machine ethics is a very recent field in applied ethics which is concerned with two questions: first, examining the moral status of artificially intelligent machines, and second, making robots ethical, that is, giving them the means to make ethical decisions and act accordingly”, explains robot ethics expert and VMU Professor John Stewart Gordon, head of the VMU research cluster of Applied Ethics.

The researcher claims that within the next 50 to 100 years artificial intelligence should become more or less comparable to humans: robots could be capable of making moral decisions and not only look but also think like us. “The idea is at some point to have machines that are able to process data and act intelligently, like the AI robot Data in Star Trek, and not only look like humans but are also able to act on their own, as in the popular TV series Westworld. I would say it is only a matter of time until intelligent life is able to do things in a better way than we humans do, but we must start to act now and engage with these issues as early as possible”, the professor says.

Robot morality is often discussed in the context of drones, or unmanned aerial vehicles, whose use is growing today. In the future, drones could function without any human control; autonomous weapons are already being called the third revolution in military technology, after gunpowder and nuclear arms.

At first glance, drones have numerous advantages. When soldiers are replaced by robots, human lives are saved, surveillance capabilities increase, and there is no risk of fatigue after long hours of work. However, more and more scientists, lawyers and philosophers argue that robots should not be soldiers because they lack the human ability to evaluate and fully comprehend the meaning of taking a life.

“As a matter of the preservation of human morality, dignity, justice, and law we cannot accept an automated system making the decision to take a human life. And we should respect this by prohibiting autonomous weapon systems. When it comes to killing, each instance is deserving of human attention and consideration in light of the moral weight inherent in the active taking of a human life”, Prof. Peter Asaro, a technology philosopher working at Stanford Law School, wrote in the International Review of the Red Cross.

Speaking about artificial intelligence in general, Prof. John Stewart Gordon holds the view that robots should be given as much autonomy as possible, but that they should not be entrusted with responsibility in more sensitive areas, even if at some point they become able to make moral decisions better than humans themselves.

“As human beings, we are biased, we have our interests and desires that always come into play when we make decisions. But to give up moral reasoning to AI would mean that we also lose something very essential to humans, namely that we are beings that make our own decisions, whether for the good or for the bad”, Prof. Gordon says.

Elon Musk, the CEO of SpaceX and Tesla, is also concerned about the threat of a robot uprising and even proposed a possible solution last month. Since the growth of AI is inevitable, he argues, people should keep up and improve along with it: human brains could be integrated with software that improves memory and enables direct interaction with computers. The Wall Street Journal writes that Musk’s new company Neuralink will use “neural lace” technology to implant small electrodes in the brain to treat epilepsy, depression and other disorders. Later, it could also serve as a means for people to prevent a revolt of AI machines.

While Elon Musk and Silicon Valley innovators seek technological solutions, robot ethics researchers are deliberating the moral and philosophical consequences of progress in artificial intelligence. Robot ethics unites very different fields: technology, robotics, ethics and moral philosophy. It is a highly interdisciplinary area in which scholars of not only robotics but also psychology, law, philosophy and other disciplines cooperate.

Interdisciplinarity is also effectively utilised in the research cluster of Applied Ethics, headed by Prof. John Stewart Gordon at Vytautas Magnus University in Kaunas, Lithuania. The cluster brings together experts researching robot ethics and other subjects at the intersection of several disciplines, such as smart law, ethics of the virtual world, elderly law and other issues.

Prof. John Stewart Gordon is the Head of the Research Cluster of Applied Ethics at the Faculty of Humanities at Vytautas Magnus University. He is also a member of the editorial board of the scientific journal Bioethics and editor-in-chief of the book series “Philosophy and Human Rights”. He has written and edited books on practical philosophy and has published articles and special issues in leading scholarly journals and encyclopedias.
