Human fascination with, and even obsession with, robots is nothing new. For many years people have imagined distant versions of the future in which human interaction with robots, androids, and other robotics products was a routine part of life both at work and at home. Sometimes these forward-looking scenarios focus on convenience, service, and speed. Much more often, however, when asked to contemplate a future with ubiquitous artificial intelligence (AI) technology embedded alongside humans, people's thoughts stray toward troubling or dark impacts on society. People worry about a loss of humanity as technology predominates, or about the possibility that robots could be misused, or could even gain sentience and intend to work against or harm humans.
In the past these scenarios – both of the positive advancement of society and of the potential for an isolating, dangerous dystopia – were mostly relegated to science fiction books, Hollywood blockbuster movies, or what were dismissed as the overactive imaginations or paranoid opinions of luddites. Now, however, the news is full every day of developments in AI technology that bring the once-imaginary potential of robots ever closer to present reality.
As technologists and business organizations consider the utility of advances in AI, ethicists and corporate compliance programs must also consider the risk management issues that come along with robots and robotics. Technology that will have such a broad and deep impact on human life must be anticipated with thoughtful planning for the compliance risks that can arise. In particular, the potential for sharing human traits with AI technology, or for embedding AI technology in place of human judgment, presents provocative challenges.
- Anticipating increased interactions with androids – robots that look like humans and can speak, walk, and otherwise “act” as humans would – leads to a logical question: will humans have relationships with androids, and vice versa? These would not be merely transactional interactions, like giving and receiving directions or speaking back and forth on a script written to exploit or advance machine learning within the android. Rather, they could be intimate, emotionally significant exchanges that build real connections. How can this be when only one side of the equation – the human – is assumed to be able to feel and think freely? While the technical production of robots that appear credibly human-like is still beyond the reach of current science, and giving them a compelling human presence that could fool or attract a human is even further away, work on these tasks is well underway, and it is not unreasonable to consider the possible consequences of these developments. Will humans feel empathy and other emotions for androids? Can people ever trust robots that seem to be, but are not, people? Will the lines between “us” and “them” blur? The burgeoning field of human-robot interaction research seeks to answer these questions and to develop technology that responds to and considers these tensions. Love in the Time of Robots
- On a similar note, when could machine learning become machine consciousness? Humans have embraced the usefulness of AI technologies that become smarter and more effective over time as they are exposed to more knowledge and experience. This is a great argument for deploying technology to support and improve efficiency and productivity. Everyone wants computers, networked devices, and other products that use advanced technology to work more accurately and easily. Machine consciousness, however, suggests independent sentience or judgment, a potential that unsettles humans. From a compliance and ethics perspective there is an extra curiosity inherent in this – what will be the morality of these machines if they achieve consciousness? Will they have a reliable code of ethics from which they do not stray and which comports with human societal expectations? Will they struggle with ethical decision-making and frameworks as humans do? Or will human and human-like practical ethics diverge completely? Can Robots be Conscious?
- In 2016, David Hanson of Hanson Robotics created a humanoid robot named Sophia. At his prompting during a live demonstration at the SXSW festival, Sophia answered his question “Do you want to destroy humans?… Please say ‘no’” by saying, “OK. I will destroy humans.” Despite this somewhat alarming declaration, during the demonstration Sophia also said that she was essentially an input-output system, and therefore would treat humans the way humans treated her. The intended purpose of Sophia and future robots like her is to provide assistance in patient care at assisted living facilities and in visitor services at parks and events. In October 2017, Saudi Arabia recognized the potential of the AI technology which makes Sophia possible by granting her citizenship ahead of its Future Investment Initiative event. A robot that once said it would ‘destroy humans’ just became a robot citizen in Saudi Arabia
- The development of humanoid robots will certainly become a bioethics issue as the technology to take human traits further comes within reach. While there are many compelling cases for how highly advanced AI could be good for the world, the risk of making robots somehow too human will always be evocative and concerning to people. The unsettling gap between humans and human-like androids is known as the uncanny valley – the space between organic and inorganic, natural and artificial, cognitive and learned. The suggestion that the future of human evolution could be “synthetic” – aided by or facilitated through the development of androids and other robotics – presents a fascinating challenge to bioethics. Are humanoid robots objects or devices like computers or phones? It is necessary to consider humans and androids in comparison to one another, just as we do humans and animals, for example. This ethical dilemma gets to the root of what the literal meaning or definition of life is, and what it takes for someone, or something, to be considered alive. Six Life-Like Robots That Prove The Future of Human Evolution is Synthetic
- One of the potential uses of AI technology that worries people most is autonomous weapons. The technology already exists for weapons which can be used against people without human intervention or supervision in deploying them. Militaries around the world have been quick to develop and adopt weapon technology that uses remote computing techniques to fly, drive, patrol, and track. However, the established uses of this technology are either non-weaponized or, in the case of drones, involve deployment of weapons by a human controller. Fully automating this technology would in effect give AI-powered machines decision-making ability that could lead to killing humans. Many technologists and academics are warning governments to consider preventing the large-scale manufacturing of these weapons via a pre-emptive treaty or other international law. Ban on killer robots urgently needed, say scientists
As the diverse selection of stories above illustrates, robots, robotics, androids, and other developments in AI technology are certain to permeate and indeed redefine human life. This will not happen in some distant or unperceived future; real impact from these advancements is already being seen, and there is only more to come. Governments, organizations, and individuals must make diligent risk assessment preparations to integrate this technology with human life in a harmonious and sustainable fashion.