Considerations from bioethics are prevalent throughout scientific research. As biotechnology innovations advance in both science and medicine, research methodology standards and practices become more ethically complex. Bioethics is traditionally centered on the link between humans and the sciences. The far reach of bioethics into health and human sciences reflects how pervasive the ethical obligations and moral choices in scientific research can be. As humans continue to explore the far boundaries of existing scientific knowledge for their own benefit, the resulting transformations to all areas of human life will also change the ethical choices and challenges involved.
Black Mirror’s fourth season continues the themes of the previous three series of the show. As discussed in this post, the show makes often uncanny connections between human life and technology, frequently covering the ways in which social media, AI, biometric devices, and other advanced technological systems and devices affect and change society. What makes Black Mirror so effective, and often so disturbing, is that each of the anthologized stories contains not only a vision of the future but also a warning about the disruptions that would happen to people along the way. The reality depicted in Black Mirror is like an amped-up version of the world that seems already nearly within reach, with technological advancements abounding to make life easier or more entertaining. However, the point of view in the show is markedly dystopian, forcing viewers to consider the addictive or even dangerous influence that immersive technologies could have.
This is the fifth in a series of seven posts about regulatory compliance priorities and enforcement trends. The first post was about the Commodity Futures Trading Commission (CFTC). The second post was about the Federal Trade Commission (FTC). The third post was about the Securities & Exchange Commission (SEC). Last week’s post was about the Food & Drug Administration (FDA). Today’s post will be about the U.S. Department of Agriculture (USDA). Next week’s post, on Thursday January 25, will be about the Environmental Protection Agency (EPA). Finally, on Thursday February 1, the post will be about the Federal Communications Commission (FCC).
Human fascination with, and even obsession with, robots is nothing new. For many years people have imagined distant versions of the future where human interaction with different types of robots, androids, or other robotics products was a routine part of life both at work and at home. Sometimes these forward-looking scenarios focus on convenience, service, and speed. Much more often, however, when asked to contemplate a future with ubiquitous artificial intelligence (AI) technology embedded alongside humans, thoughts stray toward troubling or dark impacts on society. People worry about loss of humanity as technology predominates, or about the possibility that robots could be misused or even gain sentience and intend to work against or harm humans.
In the past these scenarios, both of the positive advancement of society and of the potential for isolating, dangerous dystopia, were mostly relegated to science fiction books, Hollywood blockbuster movies, or what were seen as the overactive imaginations or paranoid opinions of luddites. Now, however, the news is full every day of developments in AI technology that bring the once-imaginary potential of robots ever closer to present reality.
As technologists and business organizations consider the utility of advancement in AI, ethicists and corporate compliance programs must also consider the risk management issues that come along with robots and robotics. Technology that will have such a broad and deep impact on human life must be anticipated with thoughtful planning for the compliance risks that can arise. In particular, the potential for sharing human traits with AI technology, or for embedding AI technology in place of human judgment, presents provocative challenges.
Anticipating increased interactions with androids – robots that look like humans and can speak, walk, and otherwise “act” like humans would – leads to the logical question of whether humans will have relationships with androids, and vice versa. These would be not just transactional interactions, like giving and receiving directions, or speaking back and forth on a script written to exploit or increase machine learning within the android. Rather, these could be intimate, emotionally significant exchanges that build real connections. How can this be when only one side of the equation – the human – is assumed to be able to feel and think freely? While technical production of robots that appear credibly human-like is still beyond the reach of current science, and giving them a compelling human presence that could fool or attract a human is even further away, work on these tasks is well underway, and it is not unreasonable to consider the possible consequences of these developments. Will humans feel empathy and other emotions for androids? Can people ever trust robots that seem to be, but aren’t, people? Will the lines between “us” and “them” blur? The burgeoning field of human-robot interaction research seeks to answer these questions and to develop technology that responds to and considers these tensions. Love in the Time of Robots
On a similar note, when could machine learning become machine consciousness? Humans have embraced the usefulness of AI technologies that become smarter and more effective over time as they are exposed to more knowledge and experience. This is a great argument for deploying technology to support and improve efficiency and productivity. Everyone wants computers, networked devices, and other products that use advanced technology to work more accurately and easily. Machine consciousness, however, suggests independent sentience or judgment abilities, the potential of which unsettles humans. From a compliance and ethics perspective there is an extra curiosity inherent in this – what will be the morality of these machines if they achieve consciousness? Will they have a reliable code of ethics from which they do not stray and which comports with human societal expectations? Will they struggle with ethical decision-making and frameworks like humans do? Or will human and human-like practical ethics diverge completely? Can Robots be Conscious?
In 2016, David Hanson of Hanson Robotics created a humanoid robot named Sophia. At his prompting during a live demonstration at the SXSW festival, Sophia answered his question “Do you want to destroy humans?… Please say ‘no’” by saying, “OK. I will destroy humans.” Despite this somewhat alarming declaration, during the demonstration Sophia also said that she was essentially an input-output system, and therefore would treat humans the way humans treated her. The intended purpose of Sophia and future robots like her is to provide assistance in patient care at assisted living facilities and in visitor services at parks and events. In October 2017, Saudi Arabia recognized the potential of the AI technology which makes Sophia possible by granting her citizenship ahead of its Future Investment Initiative event. A robot that once said it would ‘destroy humans’ just became a robot citizen in Saudi Arabia
The development of humanoid robots will certainly become a bioethics issue in the future as the technology to take human traits further comes within reach. While there are many compelling cases for how highly advanced AI could be good for the world, the risks of making machines somehow too human will always be evocative and concerning to people. The gap between humans and human-like androids is called the uncanny valley, the space between organic and inorganic, natural and artificial, cognitive and learned. The suggestion that the future of human evolution could be “synthetic” – aided by or facilitated by the development of androids and other robotics – presents a fascinating challenge to bioethics. Are humanoid robots objects or devices like computers or phones? It is necessary to consider humans and androids in comparison to one another, just as it is humans and animals, for example. This ethical dilemma gets to the root of what the literal meaning or definition of life is and what it takes for someone, or something, to be considered alive. Six Life-Like Robots That Prove The Future of Human Evolution is Synthetic
One of the potential uses of AI technology that worries people the most is in autonomous weapons. The technology in fact already exists for weapons that can be used against people without human intervention or supervision in deploying them. Militaries around the world have been quick to develop and adopt weapon technology that uses remote computing techniques to fly, drive, patrol, and track. However, the established use of this technology is either for non-weaponized purposes or, in the case of drones, deployment of weapons with a human controller. Fully automating this technology would in effect give AI-powered machines decision-making ability that could lead to killing humans. Many technologists and academics are warning governments to consider preventing large-scale manufacturing of these weapons via pre-emptive treaty or other international law. Ban on killer robots urgently needed, say scientists
As the diverse selection of stories above illustrates, the reach of robots, robotics, androids, and other developments within AI technology is certain to permeate and indeed redefine human life. This will not be in the distant or unperceived future. Rather, real impact from these advancements is already starting to be seen, and there is only more to come. Governments, organizations, and individuals must make diligent risk assessment preparations to integrate this technology with human life in a harmonious and sustainable fashion.
Black Mirror is a very popular US-UK television science fiction series. It originally aired on Channel 4 in the UK and is now released and broadcast by the subscription video streaming service Netflix. The series is anthology-style, with short seasons of stand-alone episodes that are like mini films. Most of the episodes touch upon the dominance of and overreach into human life by technology, such as social media, AI, and other advanced, immersive systems and devices. The take offered is quite dramatic, often delving deeply into adverse psychological and sociological effects on modern society and taking a dark and even dystopian perspective.
While all the episodes of Black Mirror do depict a future reality, it is an immediate and accessible reality impacted by technology exceeding that which is currently possible but not so much as to be unthinkable. Indeed, the title of the show, Black Mirror, refers to current technology which is increasingly ubiquitous and addictive – television screens, computer monitors, and smartphone displays. The show both entices with the idea that many of these technological advancements could be convenient or novel or life-enhancing, while also warning that the obsessive and addictive aspects of technology could cause great harm and disruption if not developed and managed thoughtfully and carefully with the risks well in mind.
“The Entire History of You” (Series 1, Episode 3): In this episode, a couple struggling with mistrust and insinuations of infidelity make disastrous use of a common biometric – a “grain” implant everyone has that records everything they see, hear, and do. The recordings on the implants can be replayed via “re-dos.” This is used for surveillance purposes by security and management, as the memories can be played to an external video monitor for third parties to watch. Individuals can also watch the re-dos from their implants directly in their eyes, which allows them to repeatedly review interactions, often leading them to question and analyse the sincerity and credibility of the people involved. People can also erase the records from their implants, altering the truthfulness of the recordings. This undermines trust and honesty in a society where both have already been eroded in contemporary life by the influence of the internet.
“Be Right Back” (Series 2, Episode 1): In this episode, Martha is mourning her boyfriend, Ash, who died in a car accident. As she struggles to deal with his loss, her friend, who has also lost a partner, recommends an online service that allows people to stay in touch with dead loved ones. The service crawls the departed person’s e-mail and social media profiles to create a virtual version of the person. After the machine learning advances enough by consuming written communications, the service can also digest videos and photos, graduating from chatting via instant message to replicating the deceased’s voice and talking on the phone. At its most advanced, the service even allows a user to create an android version of the deceased that resembles him or her in every physical aspect and imitates the elements of the dead person’s personality that can be discovered from the online record. However, in all this there is no consideration given to the data privacy of the deceased person or to his or her consent to be exposed to machine learning and replicated in this manner, including even in physical android form.
“Nosedive” (Series 3, Episode 1): This is one of the most popular, critically acclaimed episodes of the series, and one of the obvious reasons for this is that it focuses on social media and how it impacts friendships and interactions. The addictive aspects of social media are already a hot topic in design ethics, driving people to question whether social media networks like Facebook or Twitter are good for the people who use them, and where to draw the line between a fun, entertaining way to connect and share and a platform with a potentially dark and abusive impact on users. In this episode, everyone is on social media and is subject to receiving ratings from virtually everyone they encounter. These ratings determine people’s standing both on social media and in the real world as well – controlling access to jobs, customer service, housing, and much more. Anxieties and aspirations about ratings drive everything people do and all the choices they make. “Addictive” has been met and surpassed, with social media having an absolutely pervasive impact on everyone’s lives.
“San Junipero” (Series 3, Episode 4): One of the most universally loved episodes of Black Mirror, San Junipero depicts the titular beach town, which mysteriously appears to shift in time throughout the decades. Kelly and Yorkie both visit the town and have a romance. San Junipero turns out to be a simulated reality which exists only “on the cloud,” where people who are at the end of their lives or who have already died can visit to live in their prime again, forever if they so choose. In the real world, Kelly is elderly and in hospice care, while Yorkie is a comatose quadriplegic. Both eventually choose to be euthanized and uploaded to San Junipero to be together forever, after first getting married so that Kelly can give legal authorization for Yorkie to pass over. The bioethical considerations of such a reality are clear – in this society, assisted suicide is a legal normalcy, and part of patient care is planning one’s method of death and treatment path after death, with digitalization being a real option. All of the San Junipero simulations exist on huge servers, and judging by how many lights are flickering in the racks this seems to be a popular practice – but what about the cybersecurity and information security of the simulations? What if the servers were hacked or damaged? This gives a new meaning to humanity and places an entirely different type of pressure on making sure that technology is used safely and the data stored on it is protected.
“Men Against Fire” (Series 3, Episode 5): This episode concerns the future of warfare in a post-apocalyptic world. Soldiers all have a biometric implant called MASS that augments reality, enhances their senses, and provides virtual reality experiences. One soldier’s implant begins to malfunction, and he soon learns that the MASS is in fact altering his senses so that he will not see the individuals he is told are enemy combatants as people. It turns out that the soldier is part of a eugenics program practicing worldwide genocide, and the MASS is being used to deceive the soldiers and turn them into autonomous weapons who murder on command due to its augmentations and alterations of reality. This storyline falls uncannily close to many current concerns about the adoption of autonomous weapons that are not directed or monitored by humans, which are nearly within technological capability to be created and are the subject of international calls for appropriate supervision and restraint in their development.
Black Mirror offers many interesting scenarios for analysis of and study by compliance and ethics professionals considering risk management related to the use of technology in organizations and society. As described above, surveillance, data privacy, consent, design ethics, autonomous weapons and other AI, bioethics, and cybersecurity are just a sampling of the issues invoked by episodes of the series.
Bioethics is a field of ethical thought and theory which focuses its debate on the relationship between society and the biological sciences. These two sets of interests intersect and collide very frequently in medicine, where the impact of scientific advancement on people can truly be a life or death matter. Researchers, doctors, hospital organizations, medical service and product providers, and patients themselves all contend with bioethical dilemmas. With the ever-evolving advancement of AI and other technologies, medicine, like almost all areas of human life, is being transformed, and along with it the ethical choices and challenges present are changing too.
As previously discussed in this blog’s coverage of whistleblowers in the pharmaceutical industry, the sales and marketing of prescription drugs is a practice full of risks for fraud and misconduct. Pharmaceutical companies paying or otherwise influencing doctors to recommend and prescribe their products to patients is ripe for conflict-of-interest issues. Doctors who might prescribe medication for any reason other than the most appropriate treatment protocol and wellness outlook for their patients, to whom they owe a high standard of professional care, pose great risk of causing both intentional and negligent harm. The risk is exceptionally troublesome when the doctors have histories of fraud or misconduct. Payments from pharmaceutical companies to doctors are legal and not unusual, but they are also certainly controversial and pose significant bioethical challenges to appropriate patient care. In this case, there is definitely a call for compliance controls and ethical decision-making incentives, in that the conduct is not against any law or regulation but may certainly run afoul of society’s expectations or medical institutions’ business values: Drugmaker paid doctors with problem records to promote its pill
Traditional Chinese medicine, which uses herbs, plants, and animal parts to make teas or soups, has been relied upon as a popular remedy for centuries. While the practices of clinics offering these ingredients and instructions for using them as cures have remained largely unchanged all this time, in recent years technological innovation has reached even into these farthest corners of medical practice. Some of these old-fashioned recipes have gotten a modern variation, turning them from culinary creations into formulations for injectable drugs. The risk lies in the possibility that a patient taking the drug might have an adverse reaction, as the injectable versions of these drugs contain many different compounds, making diagnosing an allergy or contamination very difficult. As major companies enter this market, estimated at $13 billion in sales value, doctors are prescribing drugs that are largely unregulated, untested, and unknown. This presents a huge regulatory challenge to ensure public safety and to set supervisory standards for prescription and administration of the drugs: Patient Deaths Show Darker Side of Modern Chinese Medicine
Recently an elderly, unconscious patient showed up at a Miami, Florida hospital with a shocking tattoo that read “DO NOT RESUSCITATE.” Doctors and nurses found themselves struggling to resolve the ethical dilemma of determining their patient’s true desire for care, and to what extent. In the state of Florida, a do-not-resuscitate (DNR) order is valid only if it is completed on an official form and printed on the designated yellow paper. This patient did end up having such a legal form, so when his condition eventually deteriorated, the valid DNR was honored. However, the doctors did debate what reaction, if any, they owed to the tattoo and the patient’s evident choice to make his DNR wishes emphatically clear. The medical team in question provided basic care, sought ethics advice, and involved the support system of social workers to make a collaborative, respectful patient care plan. The question of what would motivate a patient to get such a tattoo, however, and the wide variety of medical and legal reactions it can provoke, presents an interesting bioethical dilemma in end-of-life care: What to Do When a Patient Has a ‘Do Not Resuscitate’ Tattoo
Continuing on this theme of patient care at the end of life, another compelling bioethical dilemma is the provision of non-essential treatment in hospice. In this case, a patient wanted eye surgery to restore his vision for the last days of his life. Being able to see again could uniquely give him the comfort, independence, and reconnection with his family that he desired before he died. Some care providers, however, would not find it acceptable to perform surgery on someone who would die only a few weeks later, incurring costs and risks to provide ultimately unnecessary treatment. The question of when and why to provide this sort of treatment to hospice patients has arguments for cost and efficiency on one side and dignity and compassion on the other: Should Eye Surgeons Fulfill A Dying Man’s Wish To See His Family?
During all surgeries and medical treatments, there is an ever-present risk that something could go wrong and the professionals performing the procedure will need to stray from the expected protocol. While this is done in the best interests of treatment success and preventing harm or even saving lives, these interventions present difficult challenges to consent and control. In the scenario of childbirth, these concerns are especially fraught: Doctors who ignore consent are traumatizing women during childbirth
The moral and ethical questions posed in the evolving practice of medicine are, and will continue to be, the subject of frequent popular debate. Medical care providers confront these issues in their work, and standards for and expectations of patient care are shaped by decision-making on these bioethical dilemmas.
Artificial intelligence (AI) describes the cognitive function of machines through technology such as algorithms or other machine learning mechanisms. The very definition of AI places technological devices with this “artificial” knowledge in comparison to and opposition with humans possessing “natural” knowledge. This discipline within technology has been around for more than sixty years and in recent years has gained enough consistent momentum that many of its once outlandish ambitions – such as self-driving cars, for example – are current or imminent reality. As computing power advances exponentially and the uses for and types of data are ever-growing, AI is becoming ubiquitous in news of the newest and emerging technological innovations.
As AI sustains and draws on its now considerable basis of achievements to make even more advancements in research and development across many business sectors, ethical and existential dilemmas related to it become more prevalent as well. Returning to that initial dichotomy between artificial or machine intelligence and natural or human intelligence, the design ethics and morality of bestowing human-like thinking ability on devices and networks raise many philosophical questions. Certain uses of AI, such as for autonomous weapons, could even pose safety risks to humans if not developed and directed thoughtfully.
These questions can go on and on; practical ethics represents the attempt to navigate the broad social context of the workplace by reconciling professional rules with moral expectations and norms. This, again, is highly pertinent to a corporate compliance program, which seeks to encourage a business culture that respects legality, approaches business competitively yet thoughtfully, and also sets standards for employee and organizational integrity. It is imperative for compliance professionals to understand practical ethics and to use dilemma sessions or open discussions with the businesses they advise in order to encourage a common comfort level with this sort of thinking throughout their organizations.
The below TED/TEDx talks emphasize the connection between AI and human life, commonly invoking questions about bioethics, practical ethics, and morality.
Artificial intelligence: dream or nightmare? (Stefan Wess) – Stefan Wess, a computer scientist and entrepreneur, provides a helpful primer on the history and current state of artificial intelligence and machine learning. Big Data, the Internet of Things, machine learning, speech recognition – all these technologies and AI-related topics are already part of daily life. But as this continues to develop, how will organizations and individuals interact with the technology? How should it best be controlled, and is it even possible to do so? The many risk implications of AI must be considered as more advanced creations become stronger and closer to reality every day.
Can we build AI without losing control over it? (Sam Harris) – Neuroscientist and philosopher Sam Harris is well-known for his commentaries on the interaction of science, morality, and society. Advanced AI is no longer just theoretical stuff of science fiction and the very distant future. Superintelligent AI – completely autonomous, superhuman machines, devices, and networks – is very close to reality. Technologists, the organizations in which they work, and the communities for which they create must all be conscientious about the development of these technologies and the assessment of the risks they could pose. Contending with the potential problems that stem from creating this very advanced AI needs to be done now, in anticipation of the technology, not later – when it may no longer be possible to control what has been designed and brought to “life.” Planning, careful control frameworks, and regulatory supervision that balances openly encouraging innovation with soberly considering safety and risk consequences are all necessary to conscientiously embark upon these amazing technological endeavors.
What happens when our computers get smarter than we are? (Nick Bostrom) – In the same vein as the previous talk, one of the consequences of extremely “smart” artificial intelligence is that machines could become just as knowledgeable as human beings – and then, of course, eventually overtake humans in intelligence. This is alarming because it suggests the potential that humans could introduce their own subservience or obsolescence via machines created to make machines smarter. Again, all participants in developing this technology, including the consumers to whom it is ultimately directed, need to consider their intentions in bestowing machines with thought and to balance the various risks carefully. With the ability for independent thought may also come the capacity for judgment. Humans must make an effort to ensure the values of these smart machines are consistent with those of humanity, in order to safeguard the relevance and survival of human knowledge itself for the future.
The wonderful and terrifying implications of computers that can learn (Jeremy Howard) – The concept of deep learning enables humans to teach computers how to learn. Through this technique, computers can transform into vast stores of self-generating knowledge. Many people will likely be very surprised to learn how far along this technology is, empowering machines with abilities and knowledge that some might think is still within the realm of fantasy. Productivity gains in application of machine learning have the potential to be enormous as computers can be trained to invent, identify, and diagnose. Computers can learn through algorithms and their own compounding teaching to do so many tasks that will free humans to test the limits of current inventions and to extend human problem-solving far beyond where it already reaches. This is certain to change the face of human employment – already bots and androids are being used for assisting tasks in diverse fields from human resources recruiting to nursing patient care. Again, the extension of these technologies must be carefully cultivated in order to neutralize the existential threats to human society and life that may be posed by unchecked autonomy of machines and artificial learning. The time to do this is now, as soon as possible – not once the machines already have these advanced capabilities with all the attendant risks.
What will future jobs look like? (Andrew McAfee) – Picking up on the theme of the changing nature of human employment as machines get smarter, Andrew McAfee draws on his academic and intellectual background as an economist to unpack what the impact on the labor market might be. The fear, of course, is that extremely human-like androids will take over the human workforce with their advanced machine intelligence, making humans mostly irrelevant and out of work. The more interesting discussion, however, is not whether androids will take away work from humans but how they may change the kinds of jobs that humans do. Considering and preparing for this reality, and educating both humans and machines accordingly, is imperative to do now.
Check back here in the future for continuing commentary on AI and its impact on human life and society, including technology and the ethics of knowledge acquisition, as well as more insights on specific AI innovations such as self-driving cars and machine learning.
Many of the contemporary challenges to the meaning of human life and the responsibility of organizations, individuals, regulators, and even governments to contend with them on a legal or regulatory level come from technology. Indeed, bioethics and design ethics are rich with ethical dilemmas caused by advancements of sophisticated technologies such as artificial intelligence and its many applications. However, there is one philosophical area that is in tension with societal existential constructs and is as old as life itself – aging and death.
The ethical dilemmas stemming from the legal and moral responsibilities humans have to themselves and each other as the end of life approaches are contentious and among the most difficult possible. These dilemmas go to the core of society’s moral ideas about the value of life, the extension of human rights throughout physical or mental incapacity due to age, and the treatment of patients and their bodies through and beyond death.
Legal guardians, funeral homes, hospitals, and other individuals and organizations working in and making profits from business related to aging and dying – encompassing legitimate activities as well as illicit ones – all have various duties to their clients and are subject to societal and legal expectations and norms. However, inspection and enforcement efforts are often uneven and struggle to keep pace with the challenges posed by abusive practices or organizational misconduct. Threats to the rights of individuals and the dignity and proper treatment – or at least clear and honest disclosures – that are expected by patients and their families, must be the focus of future regulatory scrutiny and improvement.
Overreaching paternalism in the guardianship of senior citizens is a highly disturbing trend which has been sanctioned by the courts in some jurisdictions. Legal guardians pay themselves from their wards’ estates; in some cases they have hundreds or even thousands of clients and force out family members or friends so that they can exert their control and get paid for it. Of course, guardianship is a necessary system for the care of vulnerable senior citizens who need help administering their affairs. However, it is also ripe for misuse by opportunistic individuals, to the great detriment of the seniors they take on as wards and of their loved ones. The financial and social abuses that can occur in these cases are frightening and appalling. Legal guidelines and supervisory scrutiny of these guardians should be standardized across jurisdictions to avoid undue harm to this vulnerable population and to balance the commercial caretaking aspects of the activity with the rights and dignity of the individuals concerned: How the elderly lose their rights
Funeral home regulation and inspection is currently a patchwork system at best. Gross abuses and lack of internal controls have been the subject of a number of recent investigatory reports. Employee misconduct or insufficient internal policies and procedures at an operation like a funeral home have obviously devastating potential to harm the families of departed individuals at a vulnerable and painful time in their lives. Following the loss of a loved one, the thought that the funeral home personnel trusted with the body might store the remains improperly or misuse organs and body parts is hard to even conceive. However, due to insufficient supervision and inconsistent regulatory and investigative practices, these terrible scenarios play out all too often. A coherent and cohesive regulatory framework with the strength to punish misconduct and enforce expectations of operating standards must be implemented: Gruesome Discoveries at Funeral Homes Put Spotlight on Spotty Regulations
On a related note, the dark reality of the organ trade has also been the subject of a number of recent investigatory reports. Far from just urban legends about crimes that take place in far-off lands, body brokers are very real and operating in the United States. While many of them do conduct legitimate business for scientific or medical purposes, others trade illicitly or take advantage of individuals who unknowingly give their body parts upon death, or those of their loved ones, to be later sold for profit by brokers. Fraud and misrepresentation in this industry violate the dying wishes of individuals or the difficult decisions made by families. The ease with which these illicit transactions are conducted is shocking, with human limbs or organs being bought and sold like spare car parts by some individuals. As with funeral homes, an overarching regulatory system needs to be put in place to monitor and inspect these businesses and to bring enforcement actions when necessary: The Body Trade
Turning away from illicit or abusive activities to technological advancements that touch upon aging and death, the reach of artificial intelligence has begun to stretch into this area as well. Robots and robotic devices are no longer figments of the imagination of a distant future. Many organizations are beginning to utilize them in rudimentary form for a variety of assistant-level activities and are trying to develop the AI technology to use them even more in the future. This extends to patient care as well; hospitals and nursing homes are now exploring using robots to assist nurses in treating patients. Machine learning may eventually be able to automate many aspects of basic care, removing human error and freeing human nurses to focus on more complex or individually-tailored care. This could be a great efficiency for hospital staffing in the future, but it remains to be seen how non-human interaction in the patient care arena will impact the aging experience. Compassion and a human touch are often of great emotional importance when contending with the forces of aging and illness. A mix of human and robotic care of patients will need to be carefully devised to ensure that these needs are met: Hospitals Utilize Artificial Intelligence to Treat Patients
Life extension has been a romantic subject of philosophical and scientific desire for millennia. For as long as people have been alive, they have tried to figure out ways to prolong life or prevent dying, sometimes delving deeply into the mysterious and esoteric. Current quests in this area are focused on high-tech solutions. Silicon Valley has turned its most sophisticated efforts toward life extension, seeking to “solve for death.” At the very least, these attempts may yield a technology that greatly impacts aging or pushes human life expectancies far beyond the current normal range. Within a generation this may be the force of great societal change that will redefine the needs of aging populations that live longer and continue the quest to avoid death completely: Seeking eternal life, Silicon Valley is solving for death
As demonstrated by the foregoing stories, improper practices and abuses of power, as well as technological advancements, pose risks to the nature of aging and death as it is currently defined within society. Supervisory frameworks must be developed and strengthened to protect the most vulnerable of individuals and ensure that they and their families are not treated unjustly. Risk assessments and coherent, holistic regulatory guidance should be in place to ensure that these protections are upheld.
The study of bioethics is rich and varied, always growing in diversity as emerging technologies advance. Bioethical issues have their roots in decision-making about research methodology, where academics struggled to define propriety in humans’ exploitation of the natural world – plants and animals – to further science for their own benefit. Bioethics maintains this same ethos today, centered on the link between human interests in and relationship to the sciences, notably including biology and medicine. The inquiries of bioethics extend to a huge swath of topics within health and human sciences, reflecting the deep reach technological innovations have into everyone’s lives.
First, a word on the relationship between science and morality. In Science can answer moral questions, Sam Harris suggests that the values humans rely upon to define their ethical obligations and moral choices can be seen as facts, which are the foundation of science:
Harris is a neuroscientist and philosopher who seeks to define the way that ideas about human life are shaped by the physical world in which people live. People often presume that science cannot answer the existential questions humans consider most compelling, like – what is the meaning or purpose of life? The modern world is continually impacted by technological change, but does science just provoke moral issues, or can it indeed be a force for addressing or solving them? Science is fact-driven, and so too can be people’s practical assessments about right and wrong in real life. Therefore science can and should be an authority in the domain of objective fact, rather than these considerations being based solely on non-concrete intuitions or opinions.
Building upon this presumption that science and ethics do indeed have a powerful mutual dependency, bioethics asks many moral and existential questions germane to this relationship. Animal rights, gene therapy, patient care, bio-engineering, and research methodology are just a few examples of areas where bioethical issues and debates commonly arise. The below TED/TEDx talks are a sampling of how scientists, technologists, and academics confront these challenges in their work and expect that the relationship that science and technology have with law and philosophy will continue to impact human life and society.
It’s time to re-evaluate our relationship with animals (Lesli Bisgould) – Human relationships with animals are more morally and legally complicated than many people might realize. Living with companion animals is very common and most people would say that they have compassion for animals and feel they should be treated with respect and dignity. However, humans draw unconscious lines between animals they feel are household pets, such as cats or dogs; captive animals they may think exist for educational or entertainment purposes, like whales and dolphins; livestock animals that are part of the industrial food manufacturing supply chain, like cows and chickens; and wild animals that are hunted or poached, like elephants and lions. Why do we make these distinctions and do they have some objective basis in a moral universe? What is the responsibility and response of the law?
Gene Therapy – The time is now (Nick Leschly) – Gene therapy could enable the repair of diseased or damaged cells. With applications from this technology, doctors could cure illnesses and fix injuries for good instead of requiring a lifetime of preventive and prescriptive treatment. This is an advancement that could change medicine forever. However, major funding has historically been hard to attract for research and development in gene therapy because of ethical and religious uncertainties, not to mention the resistance of some individuals and institutions within the traditional medicine establishment. Moral fears, some concrete and others more esoteric, about the dark side of where this technology could take society, even if scientists enter with the best intentions to guard against that, have been a financial and ideological barrier to progress.
Transparency, Compassion, and Truth in Medical Errors (Leilani Schweitzer) – The Alexander Pope line goes “To err is human, to forgive, divine” – but what about when the human error results in the death of a loved one? How does one forgive when the mistake is that of a professional – such as a doctor? The legal tort system and medical malpractice insurance certainly do not inspire a reaction of kindness from the survivors. However, perhaps truth is the essential element in handling a tragic event such as a medical mistake that leads to catastrophic injury or death. Truth in medicine is important when the mistake occurs, in the form of transparency, accountability, and honest communication. Truth is also important in recovery by the survivors after the mistake – remedial care, openness, and radical candor that can lead to emotional healing and inspire advocacy. Admitting and facing mistakes is a powerful act of integrity that can never be supplanted by the legal and administrative system in defining patient care responsibilities.
It’s time to question bio-engineering (Paul Root Wolpe) – As this blog often espouses, the best time to address moral or integrity questions and to consider implementing a code of ethics that will be sustainable for the future is always the same: as soon as possible. There’s no time too soon to think about the foundations of integrity in any area of society, especially when it comes to science and developing technology. In the field of bio-engineering, technology has already advanced quite far, enabling things like selective or hybrid breeding of animals, modification of food products, and the creation and manipulation of artificial cells. Regulation has become controversial as an obstacle to advancement. The presumption goes that making rules or laws that cover the scope of people’s work in a scientific area will stifle their innovation. This does not have to be true if a moral code is built into the knowledge acquisition process from the beginning. Progress and ethics are not naturally at odds and do not have to be positioned as antagonistic to each other in the pursuit of scientific discovery, but to let either take dominance over the other is short-sighted and dangerous.
Trust in research – the ethics of knowledge production (Garry Gray) – The work of research scientists weighs heavily on consumer and public safety. Most of the goods people use on an everyday basis are the product of a prolonged research and development process, which laypeople assume has been conducted with accuracy as the principal interest and free of biases. However, this is far from true in practice. Corporate funding and institutional agendas have great influence on scientific research. People are well aware of the possible danger of these influences, which are nevertheless necessary for work to be done, but the deeper problem is that the researchers themselves may believe they are able to naturally maintain independence as a function of their expertise. In reality, no conflict of interest risk management mechanism can be effective if it only exists within a person’s head. Sensitively and sensibly managing these conflicts and the biases they create is very important work that must be done responsively and proactively to support research scientists in their endeavors.
Check back in the coming weeks for further posts on bioethics, including a look at current trends in corporate compliance issues arising from bioethical debates in the scientific research and medical fields, further discussion of bioethics as it relates to artificial intelligence, and insights on the larger interrelationship between technology and ethics of knowledge acquisition, engineering, and design.
An ethical dilemma is a problem in decision-making between two or more possible choices which involve conflicting interests and challenging possible consequences. Often this can be understood as a scenario in which making one decision has an impact on the interests involved in the other decision(s) not made. Choosing to not make a decision is also, in its own right, a choice which implies these consequential dynamics. The below TED/TEDx talks are a sampling of some different dilemmas encountered and the ways that the speakers have thought about and attempted to resolve them.
The ethical dilemma of designer babies (Paul Knoepfler) – Biotechnology which was once the stuff of science fiction is now becoming an everyday reality, or at least a possibility that is easy to imagine for the not-so-distant future. For many years now there have been ethical questions about the use of gene editing technology in human embryos. This could allow scientists to mitigate the risk of certain auto-immune or congenital diseases, which would be a marvel of modern medicine. However, it could also pave the way for individuals to use the technology to alter physical appearance and pre-determine many of a person’s traits, perhaps eventually even personality characteristics. What answers does bioethics have for this dilemma? Is it worth the risks, too dangerous to justify the benefits, or somewhere in between – a technology that should be progressively and thoughtfully developed with both those risks and those benefits in careful balance?
Can we engineer the end of ageing? (Daisy Robinton) – While the prior talk considers the beginning of life, there are also bioethical considerations in scientific advancements concerning the end of life. Just as there can be cellular interventions on the biological makeup of embryos, therapeutic mechanisms of stem cell identity may already be useful in increasing longevity and health, such as by reversing the growth of cancerous cells or addressing other developmental diseases. However, what about the possibility of “editing” one’s DNA not for survival or to cure a sickness, but to improve capabilities or change aesthetic qualities? If some physiological differences are editable at the cellular level, then is it ethical to do so?
The Social Dilemma of Driverless Cars (Iyad Rahwan) – Self-driving cars have been in the news a lot recently as leading organizations such as Ford, General Motors, Tesla, and even Samsung are making major investments in this developing field. In the US, the federal government has indicated that it prefers to let technological innovation take precedence over anticipatory regulation, perhaps taking lessons learned from the initial failure of the electric car industry in the 1990s and early 2000s. The artificial intelligence of self-driving cars is ethically challenging, considering that these driverless vehicles will share the road with pedestrians and conventional vehicles. Will they be safer than cars with human drivers, or do they bring up all kinds of new safety and privacy concerns?
Machiavelli’s Dilemma (Matt Kohut) – More to the point of typical everyday interactions than the abstractions of the limits of medicine and technology, what about character judgments? The classic question remains – do we want to be loved or feared? Liked or respected? Most people of course would say some combination of both, but in first impressions or in difficult leadership situations, sometimes the choice to be one at the expense of the other is unavoidable.
The paradox of choice (Barry Schwartz) – The thing all these different dilemmas have in common is, of course, choices that individuals, organizations, and sometimes society as a whole must make. Facing the responsibility of making a choice indicates that there is freedom of choice in the first place. The privilege of decision-making can also be a burden. One must be able to decide in the beginning in order to feel some sense of personal dissatisfaction or insufficiency provoked by the idea that other choices, and other outcomes, could have been possible.
As the above demonstrates, there are many diverse examples of ethical dilemmas which come from all areas of business and life. This effectively points out how ubiquitous these challenging situations are. From simple, everyday interactions to matters of life and death, ethical dilemmas present challenging, compelling moral questions.