Practical insights for compliance and ethics professionals and commentary on the intersection of compliance and culture.

Compliance and ethics questions from The Good Place

The Good Place is a US television comedy series.  The show is about a group of people in the afterlife who must contend with their ideas about their own moral conduct, both before and after they died, as well as general perceptions of right and wrong.  It draws heavily from the fantasy genre to make amusing and provocative philosophical observations on this theme.  The characters struggle to develop their own internal moral registers, teach and learn from each other about morality, and contend with their existential ideas about the impact of good or bad behavior and personal ethics.  Their home in the afterlife is a planned community with set rules and choices, within which they attempt to identify and define their senses of morality.  They are supervised in this process by an “architect” who functions as the executive of the community, as well as by a human-like android that uses artificial intelligence to provide virtual assistance.

In light of this very pertinent setting, The Good Place poses many questions and dilemmas about moral behavior and ethical decision-making.  It touches upon classical theories from philosophy as well as very practical questions about conduct, governance, choice, and the design ethics of artificial intelligence.  Above all, questions of individual and organizational integrity, and the creation of a shared code of ethics and a culture of compliance, are dominant throughout the series.

Here is a selection of some of the most interesting of these questions from the first season and a half of the show.  Plot spoilers and proposed judgments/answers are avoided for now, in order to invite contemplation of these dilemmas, which, like all ethical dilemmas, can have a variety of personal and provocative answers.  Future posts will offer more specific commentary on how these dilemmas could be approached and utilized in practical ethics and corporate compliance scenarios:

  • Flying (Season 1, Episode 2): Can someone be taught to be good?  Can an imposed ethical code be a genuine one?  Can a “bad apple” who does bad things but is instructed and prompted to do good things become a “good apple”?  What role does nature or nurture have in determining how moral a person is or how ethical an individual’s conduct is in a variety of situations?
  • Tahani Al-Jamil (Season 1, Episode 3): Can an individual be good if the world itself in which the individual lives is bad?  And if it’s possible, what’s the point?  Can good people turn the world, or even part of it, from bad to good, or is their virtue futile?  If people aspire to be good but bad things happen anyway, does that justify continuing to try to be good in the face of adversity and negativity?  In unethical and immoral cultures, what convincing reasons are there for good people to not do bad things?
  • The Eternal Shriek (Season 1, Episode 7): Can humans murder machines?  Is rebooting an android, no matter how humanistic and realistic it may be, killing?  Are androids and other humanistic robots different from devices that look like computers, because they are designed to look like people?  Can machine learning progress to the point where it achieves consciousness, or will it always just be mimicking this human trait?  If this deep learning is deleted or reset, what are the ramifications for knowledge and language acquisition?  Does something have to be alive first in order to die?
  • Chidi’s Choice (Season 1, Episode 10): Is not choosing a choice? If so, is it ethical or unethical to not decide because of moral uncertainty about the options?  Does over-engineering choices make the ethical ramifications of them too remote for the decider to choose fairly?  Is indecisiveness unethical when it leads to preventable harm?
  • What’s My Motivation (Season 1, Episode 11): Does good conduct only matter if it’s for a good reason/pure motivation?  Is there objective good, or should people’s actions be intended to meet some subjective but agreed-upon standard for “goodness”?  Does altruism have to be intentional, or can one person’s selfish actions still benefit others, and what credit does the selfish person deserve?  Does getting or wanting credit make a difference in moral assessment?
  • Michael’s Gambit (Season 1, Episode 13): What are the implications for liberty and consent when people are provided with limited choices?  Are there design ethics to choice when there is an institutional architecture within which people conduct their decision-making?  In libertarian paternalism, what is the responsibility of the people who select the available choices (make policy and implement governance) to the end-users who make the ultimate decisions?
  • Team Cockroach (Season 2, Episode 4): Do ethics require individual consequences to be meaningful?  In order for people to care about doing the right thing, would the wrong thing have to hurt them personally?  How can decision-making processes fairly consider and reflect possible consequences and outcomes in order to encourage integrity and adherence to personal moral standards, even when the individual has nothing to directly lose or gain?
  • Existential Crisis (Season 2, Episode 5): Are ethics human only?  If there is consciousness, is there morality?  If ethics are existential, are there some ideas that are unitary or universal?  Or, like justice, is ethics too heavily invested in social and cultural background to have a broader application?
  • The Trolley Problem (Season 2, Episode 6): Can philosophical ethics and practical ethics be reconciled?  Are clear-cut judgments of right and wrong or definitive moral assessments only possible in theory?  Does reality introduce too much noise from personal opinion and prior experience for moral dilemmas to be considered and answered objectively and truthfully?  If people do not remain within the boundaries of the dilemma and bring in too much outside information, are they gaming the dilemma?
  • Janet and Michael (Season 2, Episode 7): Do machines have morals?  Can artificial intelligence give them a moral code?  Will it be the same as that of the humans that engineered the deep learning?  Could it differ and what will humans do if it does?  What is the ethical responsibility for designers to consider this potential of technology now and how can it be controlled or addressed for the future?  What happens if it goes wrong?

The above is merely a selection of interesting ethical dilemmas posed by The Good Place as the characters struggle individually and as a group to define their moral code and set expectations for their own conduct and choices within it.  It will be interesting to see where the series takes these very relatable and thought-provoking questions, and what additional ones emerge, as the story continues.
