Categories
Compliance and ethics business case studies

Inexperienced CEOs and immature compliance cultures

It is never too early, or too burdensome, to create a fundamental business compliance program.  Small businesses, new businesses, and experimental businesses can all benefit tremendously from the foundation and organizational structure that a basic risk control framework can bring.  A disruptive or innovative company does not have to eschew everything about traditional business in favor of transformative and novel ways of working.  It is fair to say that some strategies or philosophies may be seen as staid or unlikely to keep pace with the competitive and development pressures these businesses face.  However, a corporate compliance culture, combining common-sense responsibility (values-based) with the implementation of legal and regulatory guidelines (rules-based), encourages and supports business sustainability.

All too often, however, start-up companies lack this structural backbone.  They do not have adequate policies and procedures in place, are unable to cope with the employee and supervisory demands that emerge in their workplaces and marketplaces, and grow into business practices without the controls framework and the governance, risk management, and compliance structures that they come to find they need.  Most concerning of all, with their attention devoted to survival and then growth, these companies find themselves without genuine, integrity-supporting corporate cultures, and attempts to impose such cultures over the top of the existing environment are artificial and difficult.

This challenge becomes only stronger when a company without a confident hold on compliance and ethics building blocks is dominated by a founder or CEO who is, him- or herself, on unproven ground.  Inexperienced CEOs may have amazing, ground-breaking ideas and new ways to develop and market them, but if they are not effective as either leaders or managers, then they may fall back on personality ethic.  These are the leaders whose individual credibility and identities dominate every aspect of their business in the eyes of investors, colleagues, employees, customers, and the public in general.

Without a prevailing independent corporate culture that relies on a collective character ethic and mature organizational integrity, these situations do not make for long-term viable business strategies.  Instead, these companies all too often slip into misconduct, fraudulent practices, and an overall culture of non-compliance.   Risk from regulatory non-adherence, corner-cutting in basic business operational requirements, and other malfeasance is not controlled by the appropriate and thoughtful defense strategies that a compliance program could create, implement, and monitor.

There are a number of examples of companies that grew impressively and then suffered due to insufficient leadership or immature management.  In each case, these businesses are known for a prominent figurehead whose personality attracted the press and the public and whose ideas were exciting to the markets and enticing to investors.  However, the legal and regulatory inadequacies of these businesses and their cultures have hobbled their lasting ascent:

  • Apple – Steve Jobs – The ouster of Steve Jobs from the company he created, Apple, led by John Sculley, the mentor Jobs brought on to guide him to the next level as CEO, is the stuff of Silicon Valley legend. While this is often seen as an epic example of corporate disloyalty and executive board politics, the more powerful lesson here is about business values and sustainable practices.  At the time Jobs was fired from his own company, emotional intelligence, inner success, and business mission statements were not part of the popular parlance.  Perhaps if they had been, Sculley and Jobs wouldn’t have found themselves permanently estranged: Former Apple CEO John Sculley admits Steve Jobs never forgave him, and he never repaired their friendship, before Jobs died
  • Nasty Gal – Sophia Amoruso: The retail entrepreneur and self-proclaimed “girl boss” may beg to differ with her inclusion in this list, but Sophia Amoruso is a classic example of personality ethic over character ethic.  Amoruso developed a company in her own image, and then turned her image into a personal brand that both transcended and hindered Nasty Gal.  Amoruso is a polarizing personality, and the whimsical approach she embraced in her life may be great for a career as a motivational speaker and writer, where people who need inspiration can take a few tips from her for self-development.  However, a business that succeeded due to Amoruso’s successes, and that lacked its own corporate identity and developed business culture, was also vulnerable to failing due to her failures; this led to the ultimate undoing of her brand (rescued by a larger corporate entity, away from Amoruso’s control), rather than to its longevity:  What Comes After Scandal and Scathing Reviews? Sophia Amoruso Is Finding Out
  • Uber – Travis Kalanick – Travis Kalanick’s tenure at Uber started in idolatry around the industry, when everyone with an idea for an app wanted to imitate and one-up his path to success. Starting in 2016, however, cracks began to show in the pedestal Kalanick had been placed upon.  Once his public relations woes began, they never ended, even after he was ousted as CEO of Uber for countless issues with the company’s corporate culture for employees, regulatory adherence in critical markets, and legal risks.  All of these problems came out in a powerful confluence, at least in part because Uber’s quick rise to the top was enabled by non-compliance via omission at its origins:  Uber Scandal Timeline: Why Did CEO Travis Kalanick Resign?
  • Thinx – Miki Agrawal – Check out this post for a comprehensive take on the inappropriate conduct modelled by Miki Agrawal and the destructive impact it had on corporate culture at her innovative feminine hygiene apparel company, Thinx.
  • Theranos – Elizabeth Holmes – Check out this post for a look at the cult of personality created by Elizabeth Holmes at the blood-testing device company Theranos, and the fraudulent business practices and misrepresentations that were enabled by it.
  • Tinder – Sean Rad – Check out this post for a detailed discussion of the emotional un-intelligence that dominated the start-up culture of Tinder due to the influence of its CEO, Sean Rad, and the absence of a burgeoning compliance program to match the booming dating app business.

For an interesting counterpoint, check out the post on Eric Schmidt at Google.  Google is not without its corporate culture challenges, particularly as shown in 2017 by the loud public discussion over diversity and engagement in its ranks and the company’s clumsy and performative handling of this bad publicity.  However, Google has often portrayed Eric Schmidt as the grown-up in the room, there not to prevent or obstruct innovation and success, but to steward and support these efforts while still taking care of the underlying business operations must-haves.  Check out this Wired article on how this management structure enabled Google’s development into one of the major digital companies in the world:  At Google, Eric Schmidt Wrote the Book on Adult Supervision

For similar discussions to this one, check out this post on essential compliance tips for small businesses, and this post on challenges faced by start-ups in Silicon Valley and other disruptive industries.

Categories
Compliance and ethics business case studies

Compliance challenges for start-ups in disruptive industries

In today’s fast-paced business world of innovation and advanced technologies, every company seems to offer the next in-demand disruption. Ever since the days of the dot-com boom and bust in the late 1990s and early 2000s, in the infancy of e-commerce and internet-based or networked products and services, companies have been striving to identify revolutionary items and ideas to market to consumers eagerly awaiting the next life-changing thing to buy. Start-ups in Silicon Valley and entrepreneurial communities all over the world want to develop the next iPhone that will transform every aspect of modern human life. Companies that provide services instead of making products all want to be the next Airbnb, the Uber of their industries, and so on.

But are those companies, and those goals of disruption for its own sake, anything to which companies should aspire? Companies in all business sectors are trying to emulate technology companies, and they may not be the best role models in terms of regulatory compliance, risk control frameworks, and business integrity fundamentals. Disruption and sustainability aren’t necessarily mutually exclusive, but many of the companies that were visible pioneers in the current wave of technological innovation and development cut ethical or foundational corners to focus on growth, sales, and branding. Companies in the new generation that seek to copy their success and single-minded commercial focus will run into legal and supervisory obstacles sooner rather than later, now that their predecessors have overstayed the honeymoon period of lax regulatory attention and are running afoul of legal, tax, and compliance concerns all over the world.

The start-up community’s response to public exposure of fraudulent or insufficient business practices – such as companies buying their own products to falsify sales success for partners and investors, or violating straightforward business operations rules like participating in mandatory state insurance programs to maintain company licensure – is to go on the defensive and blame the media.  Worse yet, they want to claim that stand-out corporate misconduct among their start-up peers is the exception, not the rule, and to distance themselves from it, without doing any self-examination or risk assessment to feed into their own continuous improvement.

However, the venture capital firms funding these start-ups’ disruptive ambitions have a fiduciary duty to their funders to contain the reputational risk that could stem from these companies’ public relations and legal problems. The “bad apples” theory cannot win the day in explaining why so much goes so wrong at so many start-ups that were once ambitious and backed by prestigious funders and now have failed, are being sued for fraud, investigated for investor abuse, accused of forgery or inappropriate accounting practices, or have otherwise missed out on reaching disruption and instead fallen into disrepute.

In any business dominated by private companies getting rich quick, operating within loopholes or blind spots of current legal and regulatory enforcement agendas, transparency becomes the victim of innovation. Doing things the right way, with respect to ethical concerns or compliance requirements that could pop up further down the road, is subverted in favor of making money, attracting more investors, and bringing a product or service to market first and with the most attention. “Fake it till you make it” is a toxic approach to management and no kind of leadership whatsoever. Ignoring legal and regulatory requirements cannot go on forever, as the many bans and service stoppages Uber has experienced in the last year well show. Companies may be able to grow quickly this way, but they cannot keep their business running or have much hope of holding onto their ill-gotten gains unless they tread carefully with regulators and supervisors from the start.

The cultural forces at work here are strong, and disconcerting. Founders with no experience as CEOs and even less experience as functional managers or ethical leaders are given millions of dollars by investors and pressured to be geniuses, to redefine business and whatever it is they offer to the market in everything they ever do, and to succeed at all costs. Liberties are taken, misrepresentations are made, and not every brilliant troublemaker with a crazy idea and a team of engineers turns out to be any good at actually running a legal, functioning, mature business.

The hope, supposedly, is that people will merely bend or flout the rules, and not break them, but who’s making the distinction? There is great moral hazard in creating an incentive for behavior that leads, even incrementally, to a company that is not in simple compliance with the legal requirements for operating a business in the city, state, or country where it is located. Cautious onlookers assume that maybe if a few corners are cut at the beginning when things are small, it will all work out okay, because by the time the company gets big, someone who likes paperwork or understands laws will stumble along and lend a hand. This is immature and short-sighted thinking.

Even if some philanthropic compliance officer did intervene, it would be too late to fix the cultural decay that grows at companies that do not have adequate business values and controls from the beginning. When people ask how it’s possible that business fraud and misconduct went on for years at some companies, or permeated every level of the organization seemingly without detection or interruption – this values void is the answer. To avoid a culture where cheating, misrepresenting, and making unethical decisions are all common, the foundations of the company must include cultural values where that conduct is expressly defined as unacceptable, and business governance structures to prevent, identify, and punish it when it happens.

For more on the challenges to ethical decision-making, and pitfalls for fraud and non-compliance, faced by start-ups, especially in the highly competitive advanced technology world of Silicon Valley, check out this article in Fortune from December 2016:  The Ugly Unethical Underside of Silicon Valley.

For further thoughts on the challenges that start-ups and emerging enterprises face with prioritizing compliance risk management, see this post on Tinder’s corporate culture and the role compliance can play in fostering professionalism in start-ups.  For practical tips, check out this post on compliance foundation must-haves for small businesses. And, check back next Wednesday, January 3, for a post on inexperienced (even if visionary) CEOs and the immature compliance cultures they cultivate by omission.

Categories
Administrative

Merry Christmas!

Merry Christmas from Compliance Culture!

In honor of the holiday, check out this article from Vice’s Motherboard on advanced technology efforts to engineer ideal Christmas trees to appeal to consumers:  Scientists Are Creating Christmas Trees That Don’t Shed Their Needles.

Categories
Compliance in popular culture

Compliance in The Circle

The 2017 movie The Circle, based on the 2013 novel of the same name by Dave Eggers, is about the impact of commercial technology on human life.  It poses common ethical and moral questions about privacy and security in a time of interconnected information sharing via social media and networked devices. The movie is a thriller which centers around a tech giant that offers advanced products and services that have transformed the way people do business and interact with each other by placing all interactions on various platforms and networks with ratings and sharing capabilities.

While the high-tech immersion depicted in The Circle is not yet current reality, technology is developing at a breakneck pace, and social media platforms, the Internet of Things, and services driven by algorithms and other artificial intelligence and machine learning grow more ubiquitous with each passing day. At its core, The Circle is concerned with the overreach of these technologies and the companies that develop and market them, and the ethical problems and moral challenges that can arise from human and societal interaction with them.

  • Secrecy as dishonesty – One of the central philosophical proclamations of The Circle comes when the protagonist, Mae, is confronted with a legal transgression she committed and, in reckoning with her actions, states, “Secrets are lies.” Mae’s central thesis is that she would not have committed her crime if someone had been watching or aware of what she was doing. The suggestion, therefore, is that secrecy is a form of dishonesty. Disclosure, on the other hand, is the ultimate truthfulness and, in this perspective, is valued over privacy. Privacy enables people to lie and conceal, and therefore leads to misconduct and distrust. Individuals giving up their expectations of privacy would supposedly lead to greater overall security and trust. The tension between liberty and safety is not an unfamiliar one in society, and the question of which takes precedence will remain an on-going and dominant moral dilemma.

  • Transparency overload – It’s easy to agree that transparency and openness encourage honesty and communication. Clear and public disclosure of organizational activities and values provides strong incentives for making the best ethical decisions and keeping integrity in mind when planning business strategy. However, the admirable mission of transparency is subject to subversion, as The Circle shows. Claims of public transparency can be selective, creating an impression of a company that values openness and progressive values when in reality it is picking and choosing disclosures while hiding malfeasance and abuse behind this self-selected façade. Also, going too far in claiming transparency on a personal level can be too much of a good thing. As above, the tension between personal privacy and public disclosure is a delicate balance which must be worked thoughtfully.

  • Surveillance and consent – In promotion of corporate and societal values of transparency and shared disclosure, the company in The Circle introduces a service in which tiny cameras are embedded everywhere out in the world. Some of the cameras are installed intentionally by users who wish to share, but others are placed in a variety of public locations without notification or permission. The video streams from the cameras are publicly available online for searching, indexing, and manipulation. While being able to see a high-definition, flexible feed of the surf at a beach is appealing for a number of reasons, cameras everywhere in public, regardless of their utility or entertainment value, can also be used by both private and public concerns to conduct surveillance. As these cameras are in some cases posted without consent or knowledge, this surveillance is vulnerable to unintended uses and can represent, as above, serious risks to personal rights and privacy expectations.

  • Cybersecurity – The company in The Circle develops, markets, and sells a technology service. Therefore the people who buy what it markets are not only purchasers or customers but also users, and as such they have heightened expectations and rights of protection by the company. Not only is the extent to which their data is collected by the company questionable (even when the users are intentionally sharing it in an excessive or imprudent manner), but the company is also obligated to store that data, and may violate individuals’ rights by viewing, accessing, or analyzing it, or by failing to keep it safe from intrusion, alteration, deletion, or other misuse by its employees or third parties. Cybersecurity risk management is a huge challenge for a company such as this one, which is clearly putting its commercial and societal ambitions over any discernible fundamental value of information security.

  • Unethical decision-making – While the titular company in The Circle repeatedly suggests that transparency can be a force for good and should be leveraged for this purpose through the widespread use of what boils down to surveillance technology, the reality of how humans use this technology shows that its use and influence are not straightforwardly positive at all. Quite to the contrary, on many occasions in the movie, disclosures and discoveries due to the technology are harmful to individuals and relationships. Despite the desire to incentivize honesty and normalize total disclosure, people end up getting hurt, both because of their own overzealous adoption of the technology and because of the actions of others. In the most dramatic example of this, a person dies due to a series of events kicked off by a crowd-sourced surveillance operation performed at a company demonstration of a new service. Unethical decision-making, both in questionable design ethics by the organization and in immoral behavior by individual users, directly causes these tragic and disturbing events.

There are many ethical and moral dilemmas posed by the availability of advanced technology which can encroach on the privacy, security, and consent of individuals. Transparency, surveillance, and risks to information security and cybersecurity are all central themes of The Circle.

Categories
Trends in business compliance

Round-up on the humanity of artificial intelligence

Human fascination with, and even obsession with, robots is nothing new. For many years people have imagined distant versions of the future where human interaction with different types of robots, androids, or other robotics products was a routine part of life both at work and at home. Sometimes these forward-looking scenarios focus on convenience, service, and speed. Much more often, however, when asked to contemplate a future with ubiquitous artificial intelligence (AI) technology embedded alongside humans, thoughts stray into possible troubling or dark impacts on society. People worry about loss of humanity as technology predominates, or about the possibility that robots could be misused or even gain sentience and intend to work against or harm humans.

In the past these scenarios, both of the positive advancement of society and of the potential for isolating, dangerous dystopia, were mostly relegated to science fiction books, Hollywood blockbuster movies, or what were seen as overactive imaginations or paranoid opinions of luddites. Now, however, the news is full every day of developments in AI technology that bring the once-imaginary potential of robots ever closer to present reality.

As technologists and business organizations consider the utility of advancements in AI, ethicists and corporate compliance programs must also consider the risk management issues that come along with robots and robotics. Technology which will have such a broad and deep impact on human life must be anticipated with thoughtful planning for the compliance risks which can arise. In particular, the potential for sharing human traits with AI technology, or embedding AI technology in place of human judgment, presents provocative challenges.

  • Anticipating increased interactions with androids – robots that look like humans and can speak, walk, and otherwise “act” like humans would – leads to the logical question: will humans have relationships with androids, and vice versa? This would be not just transactional interactions, like giving and receiving directions or speaking back and forth on a script written to take advantage of or increase machine learning within the android. Rather, these could be intimate, emotionally significant exchanges that build real connections. How can this be when only one side of the equation – the human – is assumed to be able to feel and think freely? While technical production of robots that appear credibly human-like is still beyond the reach of current science, and giving them a compelling human presence that could fool or attract a human is even further away, work on these tasks is well underway, and it is not unreasonable to consider the possible consequences of these developments. Will humans feel empathy and other emotions for androids? Can people ever trust robots that seem to be, but aren’t, people? Will the lines between “us” and “them” blur? The burgeoning field of human-robot interaction research seeks to answer these questions and develop technology which responds to and considers these tensions.  Love in the Time of Robots 
  • On a similar note, when could machine learning become machine consciousness? Humans have embraced the usefulness of AI technologies which become smarter and more effective over time after they are exposed to more knowledge and experience. This is a great argument for deploying technology to support and improve efficiency and productivity. Everyone wants computers, networked devices, and other products that use advanced technology to work more accurately and easily. Machine consciousness, however, suggests independent sentience or judgment abilities, the potential of which unsettle humans. From a compliance and ethics perspective there is an extra curiosity inherent in this – what will be the morality of these machines if they achieve consciousness? Will they have a reliable code of ethics from which they do not stray and which comports with human societal expectations? Will they struggle with ethical decision-making and frameworks like humans do? Or will human and human-like practical ethics diverge completely?  Can Robots be Conscious? 
  • In 2016, David Hanson of Hanson Robotics created a humanoid robot named Sophia. At his prompting during a live demonstration at the SXSW festival, Sophia answered his question “Do you want to destroy humans?… Please say ‘no’” by saying, “OK. I will destroy humans.” Despite this somewhat alarming declaration, during the demonstration Sophia also said that she was essentially an input-output system, and therefore would treat humans the way humans treated her. The intended purpose of Sophia and future robots like her is to provide assistance in patient care at assisted living facilities and in visitor services at parks and events. In October 2017, Saudi Arabia recognized the potential of the AI technology which makes Sophia possible by granting her citizenship ahead of its Future Investment Initiative event. A robot that once said it would ‘destroy humans’ just became a robot citizen in Saudi Arabia
  • The development of humanoid robots will certainly become a bioethics issue in the future as the technology to take human traits further comes within reach. While there are many compelling cases for how highly advanced AI could be good for the world, the risks of making robots somehow too human will always be evocative and concerning to people. The gap between humans and human-like androids is called the uncanny valley: the space between organic and inorganic, natural and artificial, cognitive and learned. The suggestion that the future of human evolution could be “synthetic” – aided by or facilitated through the development of androids and other robotics – presents a fascinating challenge to bioethics. Are humanoid robots objects or devices like computers or phones? It is necessary to consider humans and androids in comparison to one another, just as it is humans and animals, for example. This ethical dilemma gets to the root of what the literal meaning or definition of life is and what it takes for someone, or something, to be considered alive. Six Life-Like Robots That Prove The Future of Human Evolution is Synthetic
  • One of the potential uses of AI technology which worries people the most is in autonomous weapons. The technology in fact already exists for weapons which can be used against people without human intervention or supervision in deploying them. Militaries around the world have been quick to develop and adopt weapon technology that uses remote computing techniques to fly, drive, patrol, and track. However, the established use of this technology is either for non-weaponized purposes or, in the case of drones, for deployment of weapons with a human controller. Fully automating this technology would in effect give AI-powered machines decision-making ability that could lead to killing humans. Many technologists and academics are warning governments to consider preventing large-scale manufacturing of these weapons via pre-emptive treaty or other international law.  Ban on killer robots urgently needed, say scientists

As the diverse selection of stories above illustrates, the reach of robots, robotics, androids, and other developments within AI technology is certain to permeate and indeed redefine human life. This will not happen in the distant or unperceived future. Rather, real impact from these advancements is already starting to be seen, and there is only more to come. Governments, organizations, and individuals must make diligent risk assessment preparations to integrate this technology with human life in a harmonious and sustainable fashion.

Categories
Compliance in popular culture

Compliance in Black Mirror

Black Mirror is a very popular US-UK science fiction television series. It originally aired on Channel 4 in the UK and is now released and broadcast by the subscription video streaming service Netflix. The series is anthology-style, with short seasons of stand-alone episodes that are like mini films. Most episodes of the series touch upon the dominance of, and overreach into, human life by technology, such as social media, AI, and other advanced, immersive systems and devices. The take offered is quite dramatic, often delving deeply into adverse psychological and sociological effects on modern society and taking a dark and even dystopian perspective.

While all the episodes of Black Mirror do depict a future reality, it is an immediate and accessible reality, impacted by technology that exceeds what is currently possible, but not by so much as to be unthinkable. Indeed, the title of the show, Black Mirror, refers to current technology that is increasingly ubiquitous and addictive – television screens, computer monitors, and smartphone displays. The show entices with the idea that many of these technological advancements could be convenient or novel or life-enhancing, while also warning that the obsessive and addictive aspects of technology could cause great harm and disruption if not developed and managed thoughtfully and carefully with the risks well in mind.

  • “The Entire History of You” (Series 1, Episode 3): In this episode, a couple struggling with mistrust and insinuations of infidelity make disastrous use of a common biometric implant – a “grain” everyone has that records everything they see, hear, and do. The recordings on the implants can be replayed via “re-dos.” This is used for surveillance purposes by security and management, as the memories can be played to an external video monitor for third parties to watch. Individuals can also watch the re-dos from their implants directly in their eyes, which allows them to repeatedly watch re-dos, often leading them to question and analyse the sincerity and credibility of people with whom they interact. People can also erase the records from their implants, altering the truthfulness of the recordings. This troubles the status of trust and honesty in a society where both have already been eroded in contemporary life by the influence of the internet.

  • “Be Right Back” (Series 2, Episode 1): In this episode, Martha is mourning her boyfriend, Ash, who died in a car accident. As she struggles to deal with his loss, her friend, who has also lost a partner, recommends an online service that allows people to stay in touch with dead loved ones. The service crawls the departed person’s e-mail and social media profiles to create a virtual version of the person. After the machine learning advances enough by consuming and studying enough communications, it can also digest videos and photos, graduating from chatting via instant message to replicating the deceased’s voice and talking on the phone. At its most advanced, the service even allows a user to create an android version of the deceased that resembles him or her in every physical aspect and imitates the elements of the dead person’s personality that can be discovered from the online record. However, in all this, no consideration is given to the data privacy of the deceased person or to his or her consent to be exposed to machine learning and replicated in this manner, including even in physical android form.
  • “Nosedive” (Series 3, Episode 1): This is one of the most popular, critically-acclaimed episodes of the series, and one obvious reason is its focus on social media and how it impacts friendships and interactions. The addictive aspects of social media are already a hot topic in design ethics, driving people to question whether social media networks like Facebook or Twitter are good for the people who use them, and where to draw the line between a fun, entertaining way to connect and share, versus a platform with a potentially dark and abusive impact on users. In this episode, everyone is on social media and receives ratings from virtually everyone they encounter. These ratings determine people’s standing both on social media and in the real world – controlling access to jobs, customer service, housing, and much more. Anxieties and aspirations about ratings drive everything people do and every choice they make. “Addictive” has been met and surpassed, with social media having an absolutely pervasive impact on everyone’s lives.
  • “San Junipero” (Series 3, Episode 4): One of the most universally loved episodes of Black Mirror, “San Junipero” depicts the titular beach town, which mysteriously appears to shift in time throughout the decades. Kelly and Yorkie both visit the town and have a romance. San Junipero turns out to be a simulated reality which exists only “on the cloud,” where people who are at the end of their lives or who have already died can visit to live in their prime again – forever, if they so choose. In the real world, Kelly is elderly and in hospice care, while Yorkie is a comatose quadriplegic. Both eventually choose to be euthanized and uploaded to San Junipero to be together forever, after first getting married so that Kelly can give legal authorization for Yorkie to pass over. The bioethical considerations of such a reality are clear – in this society, assisted suicide is legally normalized, and part of patient care is planning one’s method of death and treatment path after death, with digitalization being a real option. All of the San Junipero simulations exist on huge servers, and judging by how many lights are flickering in the racks, this seems to be a popular practice – but what about the cybersecurity and information security of the simulations? What if the servers were hacked or damaged? This gives a new meaning to humanity and places an entirely different type of pressure on making sure that technology is used safely and the data stored on it is protected.
  • “Men Against Fire” (Series 3, Episode 5): This episode concerns the future of warfare in a post-apocalyptic world. Soldiers all have a biometric implant called MASS that augments reality, enhances their senses, and provides virtual reality experiences. One soldier’s implant begins to malfunction, and he soon learns that the MASS is in fact altering his senses so that he will not see the individuals he is told are enemy combatants as people. It turns out that the soldier is part of a eugenics program practicing worldwide genocide, and the MASS is being used to deceive the soldiers and, through its augmentations and alterations of reality, turn them into de facto autonomous weapons who kill on command. This storyline falls uncannily close to current concerns about the adoption of autonomous weapons that are not directed or monitored by humans – weapons that are nearly within technological reach and are already the subject of international calls for appropriate supervision of and restraint in their development.
Black Mirror offers many interesting scenarios for analysis and study by compliance and ethics professionals considering risk management related to the use of technology in organizations and society. As described above, surveillance, data privacy, consent, design ethics, autonomous weapons and other AI, bioethics, and cybersecurity are just a sampling of the issues invoked by episodes of the series.

Categories
Compliance in popular culture

Selected TED/TEDx talks on artificial intelligence

Artificial intelligence (AI) describes the cognitive function of machines achieved through technology such as algorithms or other machine learning mechanisms. The very definition of AI places technological devices with this “artificial” knowledge in comparison to, and opposition with, humans possessing “natural” knowledge. The discipline has been around for more than sixty years, and in recent years it has gained enough consistent momentum that many of its once outlandish ambitions – self-driving cars, for example – are current or imminent reality. As computing power advances exponentially and the uses for and types of data are ever-growing, AI is becoming ubiquitous in news of the newest and emerging technological innovations.

As AI sustains and draws on its now considerable basis of achievements to make even more advancements in research and development across many business sectors, ethical and existential dilemmas related to it become more prevalent as well. Returning to that initial dichotomy between artificial or machine intelligence and natural or human intelligence, the design ethics and morality of bestowing human-like thinking ability on devices and networks raise many philosophical questions. Certain uses of AI, such as for autonomous weapons, could even pose safety risks to humans if not developed and directed thoughtfully.

These questions can go on and on; practical ethics represents the attempt to navigate the broad social context of the workplace by reconciling professional rules with moral expectations and norms. This, again, is highly pertinent to a corporate compliance program, which seeks to encourage a business culture that respects legality, approaches business competitively yet thoughtfully, and also sets standards for employee and organizational integrity. It is imperative for compliance professionals to understand practical ethics and to use dilemma sessions or open discussions with the businesses they advise in order to encourage a common comfort level with this sort of thinking throughout their organization.

The below TED/TEDx talks emphasize the connection between AI and human life, commonly invoking questions about bioethics, practical ethics, and morality.

  • Artificial intelligence: dream or nightmare? (Stefan Wess) – Stefan Wess, a computer scientist and entrepreneur, provides a helpful primer on the history and current state of artificial intelligence and machine learning. Big Data, the Internet of Things, machine learning, speech recognition – all these technologies and AI-related topics are already part of daily life. But as the field continues to develop, how will organizations and individuals interact with the technology? How should it best be controlled, and is controlling it even possible? The many risk implications of AI must be considered as more advanced creations become stronger and closer to reality every day.
  • Can we build AI without losing control over it? (Sam Harris) – Neuroscientist and philosopher Sam Harris is well-known for his commentaries on the interaction of science, morality, and society. Advanced AI is no longer just the theoretical stuff of science fiction and the very distant future. Superintelligent AI – completely autonomous, superhuman machines, devices, and networks – is very close to reality. Technologists, the organizations in which they work, and the communities for which they create must all be conscientious about the development of these technologies and the assessment of the risks they could pose. Contending with the potential problems that stem from creating this very advanced AI needs to be done now, in anticipation of the technology – not later, when it may no longer be possible to control what has been designed and brought to “life.” Planning, careful control frameworks, and regulatory supervision that balances openly encouraging innovation with soberly considering safety and risk consequences are all necessary to conscientiously embark upon these amazing technological endeavors.
  • What happens when our computers get smarter than we are? (Nick Bostrom) – In the same vein as the previous talk, one consequence of extremely “smart” artificial intelligence is that machines could become just as intelligent as human beings – and then, of course, eventually overtake humans in intelligence. This is alarming because it suggests that humans could introduce their own subservience or obsolescence via machines created to make machines smarter. Again, all participants in developing this technology, including the consumers to whom it is ultimately directed, need to consider their intentions in bestowing machines with thought and to balance the various risks carefully. With the ability for independent thought may also come the capacity for judgment. Humans must make an effort to ensure the values of these smart machines are consistent with those of humanity, in order to safeguard the relevance and survival of human knowledge itself for the future.
  • The wonderful and terrifying implications of computers that can learn (Jeremy Howard) – The concept of deep learning enables humans to teach computers how to learn. Through this technique, computers can transform into vast stores of self-generating knowledge. Many people will likely be very surprised to learn how far along this technology is, empowering machines with abilities and knowledge that some might think are still within the realm of fantasy. Productivity gains from the application of machine learning have the potential to be enormous, as computers can be trained to invent, identify, and diagnose. Computers can learn through algorithms and their own compounding teaching to do so many tasks that will free humans to test the limits of current inventions and to extend human problem-solving far beyond where it already reaches. This is certain to change the face of human employment – already bots and androids are being used for assisting tasks in fields as diverse as human resources recruiting and nursing patient care. Again, the extension of these technologies must be carefully cultivated in order to neutralize the existential threats to human society and life that may be posed by unchecked autonomy of machines and artificial learning. The time to do this is now, as soon as possible – not once the machines already have these advanced capabilities with all the attendant risks.
  • What will future jobs look like? (Andrew McAfee) – Picking up on the theme of the changing nature of human employment as machines get smarter, Andrew McAfee draws on his academic and intellectual background as an economist to unpack what the impact on the labor market might be. The fear, of course, is that extremely human-like androids will take over the human workforce with their advanced machine intelligence, making humans mostly irrelevant and out of work. The more interesting discussion, however, is not whether androids will take away work from humans but how they may change the kinds of jobs that humans do. Considering and preparing for this reality, and educating both humans and machines accordingly, is imperative to do now.
Check back here in the future for continuing commentary on AI and its impact on human life and society, including technology and the ethics of knowledge acquisition, as well as more insights on specific AI innovations such as self-driving cars and machine learning.

Categories
Trends in business compliance

Round-up on compliance of aging and death

Many of the contemporary challenges to the meaning of human life and the responsibility of organizations, individuals, regulators, and even governments to contend with them on a legal or regulatory level come from technology. Indeed, bioethics and design ethics are rich with ethical dilemmas caused by advancements of sophisticated technologies such as artificial intelligence and its many applications. However, there is one philosophical area that is in tension with societal existential constructs and is as old as life itself – aging and death.

The ethical dilemmas stemming from the legal and moral responsibilities humans have to themselves and each other as the end of life approaches are contentious and among the most difficult possible. These dilemmas go to the core of society’s moral ideas about the value of life, the extension of human rights throughout physical or mental incapacity due to age, and the treatment of patients and their bodies through and beyond death.

Legal guardians, funeral homes, hospitals, and other individuals and organizations working in and making profits from business related to aging and dying – encompassing legitimate activities as well as illicit ones – all have various duties to their clients and are subject to societal and legal expectations and norms. However, inspection and enforcement efforts are often uneven and struggle to keep pace with the challenges posed by abusive practices or organizational misconduct. Threats to individuals’ rights, and to the dignity and proper treatment – or at least the clear and honest disclosures – that patients and their families expect, must be the focus of future regulatory scrutiny and improvement.

  • Overreaching paternalism in guardianship of senior citizens is a highly disturbing trend which courts in some jurisdictions have upheld. Legal guardians pay themselves from their wards’ estates; in some cases they have hundreds or even thousands of clients and force out family members or friends so that they can exert control and get paid for it. Guardianship is, of course, a necessary system for the care of vulnerable senior citizens who need help administering their affairs. However, it is also ripe for misuse by opportunistic individuals, to the great detriment of the seniors they take on as wards and of their loved ones. The financial and social abuses that can occur in these cases are frightening and appalling. Legal guidelines and supervisory scrutiny of these guardians should be standardized across jurisdictions to avoid undue harm to any population and to balance the commercial caretaking aspects of the activity with the rights and dignity of the individuals concerned:  How the elderly lose their rights
  • Funeral home regulation and inspection is currently a patchwork system at best. Gross abuses and lack of internal controls have been the subject of a number of recent investigatory reports. Employee misconduct or insufficient internal policies and procedures at an operation like a funeral home have obviously devastating potential to cause harm to families of departed individuals at a vulnerable and painful time in their lives. Following the loss of a loved one, the thought that the funeral home personnel trusted with the body might store the remains improperly or misuse organs and body parts is hard to even conceive. However, due to insufficient supervision and inconsistent regulatory and investigative practices, these terrible scenarios play out all too often. A coherent and cohesive regulatory framework with the strength to punish misconduct and enforce expectations of operating standards must be implemented:  Gruesome Discoveries at Funeral Homes Put Spotlight on Spotty Regulations
  • On a related note, the dark reality of the organ trade has been the subject of a number of recent investigatory reports as well. Far from just urban legends about crimes that take place in far-off lands, body brokers are very real and operating in the United States. While many of them do conduct legitimate business for scientific or medical purposes, others trade illicitly or take advantage of individuals who unknowingly give their body parts upon death, or those of their loved ones, to be later sold for profit by brokers. Fraud and misrepresentation in this industry violate the dying wishes of individuals or the difficult decisions made by families. The ease with which these illicit transactions are conducted is shocking, with human limbs or organs being bought and sold like spare car parts by some individuals. As with funeral homes, an overarching regulatory system needs to be put in place to monitor and inspect these businesses and to implement enforcement actions when necessary:  The Body Trade
  • Turning away from illicit or abusive activities to technological advancements that touch upon aging and death, the reach of artificial intelligence has begun to stretch into this area as well. Robots and robotic devices are no longer figments of a distant, imagined future. Many organizations are beginning to utilize them in rudimentary form for a variety of assistant-level activities and are trying to develop the AI technology to use them even more in the future. This extends to patient care as well; hospitals and nursing homes are now exploring using robots to assist nurses in treating patients. Machine learning may eventually be able to automate many aspects of basic care, reducing human error and freeing human nurses to focus on more complex or individually-tailored care. This could be a great efficiency for hospital staffing in the future, but it remains to be seen how non-human interaction in the patient care arena will impact the aging experience. Compassion and humanity are often of great emotional importance when contending with the forces of aging and illness. A mix of human and robotic care of patients will need to be carefully devised to ensure that these needs are met: Hospitals Utilize Artificial Intelligence to Treat Patients
  • Life extension has been a romantic subject of philosophical and scientific desire for millennia. For as long as people have been alive, they have tried to figure out ways to prolong life or prevent dying, sometimes delving deeply into the mysterious and esoteric. Current quests in this area are focused on high-tech solutions. Silicon Valley has turned its most sophisticated efforts toward life extension, seeking to “solve for death.” At the very least, these attempts may yield a technology that greatly impacts aging or pushes human life expectancies far beyond the current normal range. Within a generation, this may be the force of great societal change that will redefine the needs of aging populations that live for longer and continue the quest to avoid death completely: Seeking eternal life, Silicon Valley is solving for death

As demonstrated by the foregoing stories, improper practices and abuses of power, as well as technological advancements, pose risks to the nature of aging and death as it is currently defined within society. Supervisory frameworks must be developed and strengthened to protect the most vulnerable of individuals and ensure that they and their families are not treated unjustly. Risk assessments and coherent, holistic regulatory guidance should be in place to ensure that these protections are upheld.

Categories
Compliance in current and historical events

Design ethics of addictive technology

As social media platforms, the internet of things, and other online networks advance in sophistication and prevalence, the line between engagement and addiction becomes ever thinner. Features which are designed to make browsing the internet or using connected devices more comfortable, intuitive, and pleasurable are also vulnerable to misuse and abuse which can have a highly negative impact on people’s daily routines and lives.

Indeed, the stereotypes of people too engrossed in their phones or tablets to even notice the people around them are widespread and real. So much of social interaction has been carried over into online communities and takes place on social media or in internet comment sections and forums. The positive possibilities of this kind of access to information and collaboration are boundless. Connecting across continents and sharing all kinds of information and ideas is powerful for learning, cooperation, and creativity. Making these systems better and more efficient for users to engage with only further empowers these uses. Designers, engineers, and technologists have taken the positive responses from users and implemented that feedback in coming up with new features and improvements with the aim of making the user interface and experience better.

Whether it’s making screens balanced with vivid images that are easy on the eyes or implementing machine-learning based algorithms that fill users’ feeds with the most interesting and entertaining information tailored for them, the original aim of these innovations is to make the platform or device more interesting to use and therefore to encourage the user to spend more time on it. This has obvious commercial appeal to the companies that create these networks and devices, their advertisers, and their other partners who are all competing to attract people’s attention and gain valuable impressions or content views. Time is money, and a faithful user is a lucrative one.
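The engagement-first logic described above can be caricatured in a few lines of code. This is a purely hypothetical sketch – the posts and the “predicted engagement” scores are invented for illustration – but it shows how a feed optimized solely for time-on-platform will naturally surface whatever holds attention longest:

```python
# Hypothetical sketch: a feed ranked purely by predicted engagement.
# The post data and scores are invented; no real platform's algorithm is shown.
posts = [
    {"title": "local news update", "predicted_seconds_watched": 12},
    {"title": "friend's vacation photos", "predicted_seconds_watched": 45},
    {"title": "outrage-bait thread", "predicted_seconds_watched": 180},
]

def rank_feed(posts):
    """Order posts by predicted engagement, highest first --
    the commercial incentive to maximize time spent, in a nutshell."""
    return sorted(posts, key=lambda p: p["predicted_seconds_watched"], reverse=True)

for post in rank_feed(posts):
    print(post["title"])
```

Ranked this way, the most attention-holding content always rises to the top, regardless of whether more time spent is actually good for the user – which is precisely the design-ethics tension the rest of this section explores.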

However, those eyeballs content providers and marketers wish to attract are, of course, inside the heads of people, and therefore the ever-ramping effort to engage those people runs into risky territory where interest or active participation edges into dependency and addiction. Countless studies have shown health problems stemming from overuse of phones, tablets, computers, and other devices, including eye fatigue, migraines, sleep deprivation, and other problems related to vision, concentration, or stress caused by overindulgence in looking at screens. This is not to mention the destructive social impact that over-immersion in devices can have, isolating people from their families and communities as well as interrupting work, diminishing traditional communication skills, and exposing people to online abuse and other unsafe or inappropriate content that could cause harm.

In fact, some of the loudest voices against the dark side of personal technology belong to the very designers and engineers who had a hand in creating its most addictive features. For example, the engineer involved in creating the Facebook “Like” button and the designer who worked on the “pull to refresh” mechanism first used by Twitter are among a growing group of technologists who have started to question and reject the role that immersive technologies play in their lives. These individuals understand the good intentions behind the original creation of these technologies – the hope of making them more useful or fun for users – but they also see the downsides. Dubbed “refuseniks,” these early adopters have purposefully made efforts to diminish or balance the presence of technology in their lives. As many of these addictive behaviors center around smartphones and their applications, many of the people who designed these features and now speak out against them turn off notifications, uninstall particularly time-wasting applications, and even distance themselves physically from their phones by following strict personal rules about usage or cutting off access after certain times or in specific places.

The question remains – pioneers of these features may have matured within their own careers and lives enough to realize that their earlier intentions have destructive potential they don’t want to indulge personally. But how will companies creating products and services in this space balance this as public attention begins to more commonly acknowledge the problematic nature of these features? Being a refusenik cannot be the answer for everyone, as these devices and platforms do bring great value to their users and the world as a whole, despite the negative effect they can frequently also have. Organizations working in this space can take advantage of corporate social responsibility values to balance their innovation of new features with the expectations of how consumers can use them, for good or bad.

On an individual level, it is very helpful to take personal responsibility to acknowledge and understand how these platforms and technologies are designed to keep people engaged, and how that engagement can turn to addiction. Being conscious of these features and tendencies is key. People should push themselves to understand why and how they use these technologies before adopting and engaging with them. Those who feel prone to misuse can mitigate its effects by understanding its causes and limiting their exposure.

For an interesting perspective on high-tech designers and technologists who have rejected the technologies they sometimes played pivotal roles in creating, check out this article from The Guardian.

Categories
Trends in business compliance

Round-up on compliance issues with blockchain technology

One of the hottest topics of 2017 is blockchain. This advancing technology is touted as a possible solution to seemingly every business problem conceivable. Companies across all industries – from banking to food production and seemingly everywhere in between – are experimenting with how they might be able to use blockchain to make their reporting and related processes more reliable or efficient. Many are even contemplating how they may take advantage of blockchain to market software applications to other companies, hoping to enter the profitable fintech (financial technology), regtech (regulatory technology), or suptech (supervisory technology) markets.

But what is blockchain? Most famously, it is the core technological component of the well-known cryptocurrencies, such as Bitcoin or Ethereum. Simply put, blockchain is an open list of records (the “blocks”) securely linked together with cryptography. Because each block is independently identified by a cryptographic reference to the block before it, the data contained therein is highly resistant to individual manipulation or alteration. The result is a decentralized computing system that is incredibly useful for recordkeeping and records management activities, especially where security is paramount, such as identity management and medical records.
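To make the cryptographic linking concrete, here is a minimal sketch in Python of a hash chain – the core idea behind a blockchain’s tamper resistance. The record contents are invented for illustration, and real blockchains add decentralized consensus on top of this, but the linking mechanism is the same: each block stores a hash of its predecessor, so altering any one record invalidates every block after it.

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents (including its link to the previous block)."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain, record):
    """Append a new block storing `record` plus the hash of the previous block."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "record": record, "prev_hash": prev_hash})

def is_valid(chain):
    """Verify every block still references the correct hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, "shipment 1 received")
add_block(chain, "shipment 1 inspected")
add_block(chain, "shipment 1 delivered")
print(is_valid(chain))   # True: the chain is intact

chain[1]["record"] = "tampered entry"
print(is_valid(chain))   # False: altering one block breaks every later link
```

This tamper-evidence – any retroactive change is immediately detectable by anyone re-checking the hashes – is the property that makes blockchain appealing for the compliance-oriented recordkeeping uses discussed below.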

Due to the broad desirability of a secure and adaptable record maintenance technology, blockchain, which was initially developed less than a decade ago, has already been a disruptive influence in many industries. Across all business areas, companies are looking to blockchain for possible benefits – all relevant to compliance – to their reporting processes.

  • Transparency for pension fund reporting is one major potential use of blockchain. Following the Madoff scandal and other highly-publicized frauds in the investment management industry, there has been more pressure than ever in expectations for investor protection and reporting disclosures. Many pension funds have balked at public and supervisory demands for increased transparency, citing the cost of implementing additional reporting mechanisms against very low profit margins. This reaction does not help to enhance trust between investor clients and this fraud-vulnerable industry. Therefore, the decentralized, secure nature of blockchain offers appealing opportunities for filling this confidence vacuum. Blockchain-based platforms can give investors access to their own pension information without fears of data manipulation or an increased cost burden on firms: How Blockchain is revolutionizing fraud prone industries
  • On a related note, banks and other financial institutions have borne much of the competitive pressure blockchain has created with the advent of cryptocurrencies – but they also stand to benefit, if they can make the best of it. Cryptocurrencies such as Bitcoin are a compelling alternative to the centralized, traditional banking system for customers who desire extra security or anonymity. While cryptocurrencies have traditionally been depicted as a safe haven for illegitimate or even illegal payment activities, the mainstream attention on them has created a broader appeal and audience. In response to the interest their customers have shown in cryptocurrencies, banks have started to delve into the potential of blockchain technology. Some have invested in tech start-up companies concentrating on various blockchain applications, while others have delved more deeply into relationships with fintech partners. At this point banks’ proprietary efforts have mostly been restricted to in-house research on potential uses of blockchain, but competitive momentum will inevitably start to drive larger institutions toward developing their own projects in this space. These developments are likely to encourage efficiency, inspire leaner and more innovative business models, and serve the regtech and suptech goals of increasing cooperation with regulatory authorities. Ultimately this could help to modernize the persistently staid and legacy-driven banking industry into a bolder and more transparent business model:  How banks and financial institutions are implementing blockchain technology
  • The advertising industry is newly subject to regulatory scrutiny under the upcoming EU privacy law, the General Data Protection Regulation (GDPR). This law will apply to any organization doing business in, using technology in, or targeting the citizens of, any EU country, so it has a broad global reach. The GDPR will impose new requirements for handling and controlling private data, including protective and disclosure obligations. Blockchain-based solutions are therefore appealing: they can be both secure against manipulation or leakage and distributed with open access, so that users making disclosure requests can see the information directly for themselves. This can reduce the reporting burden and improve cost margins, compared with building expensive and vulnerable in-house solutions or outsourcing the reporting to third parties with their own attendant risks: How Blockchains Can Help the Ad Industry Comply With the GDPR
  • Commercial aviation is another industry looking to blockchain systems to help with its risks – this time in cybersecurity management. Airlines and support companies rely heavily on IT systems for everything from flying and directing aircraft to booking and managing passenger travel. These systems are highly imperfect, as the system outages and computer crashes that lead to flight cancellations and stranded passengers show in the news each year. They are also vulnerable to cybersecurity risks, where intruders could breach personal data, disrupt airline operations, or corrupt and steal client and aircraft information. Storing and protecting this data within vulnerable legacy systems poses many cybersecurity challenges. The concept of tamper-proof blockchain technology is therefore compelling to the aviation industry for these obvious reasons. Blockchain could help to keep operational data safe and protect companies from cyberattacks. More importantly, pressure to adopt it could drive aviation companies to make the difficult yet very important technological updates and improvements to their systems which will serve safety and regulatory concerns alike: How Blockchain, Cloud Can Reinforce Cybersecurity in Commercial Aviation
  • The pharmaceutical industry has long been vexed by inaccurate and unreliable supply chain tracking. It is especially vulnerable to stolen and counterfeit medication entering the supply chain untracked and finding its way to patients, putting their safety at risk. Tracking medicine with blockchain could change all this. A consortium of pharmaceutical companies, including major firms Genentech and Pfizer, is already collaborating on a tool called the MediLedger Project, which seeks to manage the pharmaceutical supply chain and track medicines within it to ensure that drug deliveries are recorded accurately and transparently. This would take the current complicated and inefficient network of supply chain software management to the next level, securing the supply chain with an integrated and decentralized blockchain system. It could also enable companies to share essential information with partners and customers without exposing sensitive business information, a persistent challenge in the industry: Big Pharma Turns to Blockchain to Track Meds

There are many potential advantages to blockchain from a compliance perspective, as it has the potential to enhance transparency, protect privacy, address various process-driven risks, and strengthen cybersecurity controls, among other benefits. As the technology advances, time will tell how broad the applications of blockchain may be across these diverse industries with similar needs for compliance risk management.