Practical insights for compliance and ethics professionals and commentary on the intersection of compliance and culture.

Round-up on compliance issues with online platforms: Facebook

This is the second in a series of six posts on compliance issues with various online platforms.  Last week’s post was about YouTube.  Today’s post will be about Facebook.  Next week’s post will discuss Instagram.  The fourth post in the series, on March 29, will focus on Twitter.  The fifth post, on April 5, will be about Snapchat.  On April 12, the sixth and final post in the series will discuss Reddit.

The online social media site Facebook was created in 2004 and in the following years has become one of the most well-known online platforms. Facebook was originally created as a social networking service by and for Harvard University students and then expanded to the broader Ivy League and then general university community before opening up in 2006 to all users who meet the local minimum age requirement.  Since 2012, Facebook has been publicly-listed on the NASDAQ stock exchange.

Facebook’s rise to extreme popularity coincided with disruptive innovations in Internet-enabled devices other than traditional computers, such as smartphones and tablets. As the site grew its user base, it became an immersive and highly-engaging platform for people to share a wide variety of personal information, partake in social interactions, upload media such as photos or videos, and participate in community-based activities organized by profession, background, and interests.


Round-up on compliance issues with online platforms: YouTube

This is the first in a series of six posts on compliance issues with various online platforms.  Today’s post is about YouTube.  Next Tuesday’s post will be about Facebook.  The third post in the series, on March 22, will discuss Instagram.  The fourth post in the series, on March 29, will focus on Twitter.  The fifth post, on April 5, will be about Snapchat.  On April 12, the sixth and final post in the series will discuss Reddit.

The video hosting and sharing service YouTube was created in 2005 and is now owned by Google. YouTube contains content from both individuals as well as media corporation partners.  This content is extremely diverse, ranging from short clips to entire television shows and films as well as music videos, video blogs, live streams, and educational presentations.  YouTube also makes use of the advertising program Google AdSense and includes targeted ads on its content; most of the videos on YouTube are free to view but some ad content will appear before, during, and/or after the video plays.


Selected TED/TEDx talks on privacy and reputation

In an increasingly inter-connected and digital society, challenges to privacy and reputation are frequent.  Even before social media put everyone under constant pressure to “overshare,” when people’s most personal details were not always a quick Google search away, privacy was still under threat.  A person’s visibility and public self-representation are often scrutinized in the credibility and honesty evaluations performed by employers, potential partners, members of the community, and even complete strangers.  Giving up privacy in favor of radical openness may be the way some reality stars have attained their celebrity, but for many people this feels invasive and like a violation of security.

In a broader sense, people’s individual privacy settings in terms of what they wish to share or disclose, how, and to whom, have a direct bearing on reputation.  Cultural practices around privacy and information sharing can give rise to serious reputational risk that impacts individuals and communities and frays the social fabric in which transparency is desirable or even possible.  These norms and ethical expectations are intensified in the digital age, where an individual’s personal information can never truly be deleted or taken back once it is made public.


Compliance in Black Mirror Series 4

Black Mirror’s fourth season continues the themes of the previous three series of the show.  As discussed in this post, the show makes often uncanny connections between human life and technology, frequently covering the ways in which social media, AI, biometric devices, and other advanced technological systems and devices affect and change society.  What makes Black Mirror so effective, and often so disturbing, is that each of the anthologized stories contains not only a vision of the future but also a warning about the disruptions that would happen to people along the way.  The reality depicted in Black Mirror is like an amped-up version of the world that seems to be already nearly within reach, with technological advancements abounding to make life easier or more entertaining.  However, the point of view in the show is markedly dystopian, forcing viewers to consider the addictive or even dangerous influence that immersive technologies could have.


Compliance in The Circle

The 2017 movie The Circle, based on the 2013 novel of the same name by Dave Eggers, is about the impact of commercial technology on human life.  It poses common ethical and moral questions about privacy and security in a time of interconnected information sharing via social media and networked devices. The movie is a thriller which centers around a tech giant that offers advanced products and services that have transformed the way people do business and interact with each other by placing all interactions on various platforms and networks with ratings and sharing capabilities.

While the high-tech immersion depicted in The Circle is not yet current reality, technology is developing at a breakneck pace, and social media platforms, the Internet of Things, and services driven by algorithms and other artificial intelligence and machine learning become more ubiquitous with each passing day. At its core The Circle is concerned with the overreach of these technologies and the companies that develop and market them, and the ethical problems and moral challenges that can arise from human and societal interaction with them.

  • Secrecy as dishonesty – One of the central philosophical proclamations of The Circle comes when the protagonist, Mae, is confronted with a legal transgression she committed and, in reckoning with her actions, states, “Secrets are lies.” Mae’s central thesis is that she would not have committed her crime if someone had been watching or aware of what she was doing. The suggestion, therefore, is that secrecy is a form of dishonesty. Disclosure, on the other hand, is the ultimate truthfulness and, in this perspective, is valued over privacy. Privacy enables people to lie and conceal, and therefore leads to misconduct and distrust. Individuals giving up their expectations of privacy would supposedly lead to greater overall security and trust. The tension between liberty and safety is not an unfamiliar one in society, and the question of which takes precedence will remain an on-going and dominant moral dilemma.



  • Transparency overload – It’s easy to agree that transparency and openness encourage honesty and communication. Clear and public disclosure of organizational activities and values provides strong incentives for making the best ethical decisions and keeping integrity in mind when planning business strategy. However, the admirable mission of transparency is subject to subversion, as The Circle shows. Claims of public transparency can be selective, creating an impression of a company that values openness and progressive values when in reality it is picking and choosing disclosures while hiding malfeasance and abuse behind this self-selected façade. Also, going too far in claiming transparency on a personal level can be too much of a good thing. As above, the tension between personal privacy and public disclosure is a delicate balance which must be managed thoughtfully.



  • Surveillance and consent – In promotion of corporate and societal values of transparency and shared disclosure, the company in The Circle introduces a service where tiny cameras are embedded everywhere out in the world. Some of the cameras are installed intentionally by users who wish to share, but others are placed in a variety of public locations without notification or permission to do so. The video streams from the cameras are publicly available online for searching, indexing, and manipulation. While being able to see a high-definition and flexible feed of the surf at a beach is appealing for a number of reasons, cameras everywhere in public, regardless of their utility or entertainment value, can also be used by both private and public concerns to conduct surveillance. As these cameras are in some cases posted without consent or knowledge, this surveillance is vulnerable to unintended uses and can represent, as above, serious risks to personal rights and privacy expectations.



  • Cybersecurity – The company in The Circle develops, markets, and sells a technology service. Therefore the people who buy what it markets are not only purchasers or customers but also users, and as such they have heightened expectations of and rights to protection by the company. Not only is the extent to which their data is collected by the company questionable (even when the users are intentionally sharing it in an excessive or imprudent manner), but the company is also obligated to store it, and may violate individuals’ rights by viewing, accessing, or analyzing it, or by failing to keep it safe from intrusion, alteration, deletion, or other misuse by its employees or third parties. Cybersecurity risk management is a huge challenge for a company such as this one, which is clearly putting its commercial and societal ambitions over any discernible fundamental value of information security.



  • Unethical decision-making – While the titular company in The Circle repeatedly suggests that transparency can be a force for good and should be leveraged for this purpose through the widespread use of what boils down to surveillance technology, the reality of how humans use this technology shows that its use and influence are not straightforwardly positive at all. Quite to the contrary, on many occasions in the movie disclosures and discoveries due to the technology are harmful to individuals and relationships. Despite the desire to incentivize honesty and normalize total disclosure, people end up getting hurt, both because of their own overzealous adoption of the technology and because of the actions of others. In the most dramatic example of this, a person dies due to a series of events kicked off by a crowd-sourced surveillance operation performed at a company demonstration of its new service. Unethical decision-making, both in questionable design ethics by the organization and in immoral behavior by individual users, directly causes these tragic and disturbing events.



There are many ethical and moral dilemmas posed by the availability of advanced technology which can encroach upon the privacy, security, and consent of individuals. Transparency, surveillance, and risks to information security and cybersecurity are all common themes of The Circle as well.


Compliance and social media influencers

Influencer marketing has become a major trend in the advertising industry with the increasing dominance of social media and blog networks in the media landscape. With influencer marketing, brands and their advertising agencies identify the individuals to whom certain demographic groups look for suggestions on trends or products and services to purchase. These individuals, referred to as “influencers,” then share or produce editorial content for their followers (the people who like or connect to them on social media networks) or engage in the brand’s marketing activities.

Through these sorts of campaigns, both the brands and the influencers hope to gain a non-traditional advantage in appealing to a wider audience. From the brand perspective, they get creative and incredibly targeted content that is produced on a bespoke basis for very specific consumers who are already engaged and interested in the channel through which the content is shared. Through the detailed metrics that are abundantly available via social media and blogs, advertisers can determine which campaigns were successful in spurring either interest or actual sales. From the influencer perspective, they get opportunities to generate paid content and engage with their followers and fans in a novel way. Relationships with brands can be very lucrative for influencers, especially if they become long-term, and can drive significant, much-desired traffic for blogs and social media posts that brings attention to other content the influencer has to offer.

From the above, it is evident that along with all the opportunity comes a complex set of interests which may end up in conflict or give rise to concerns about business practices and the accuracy of representations and disclosures. For influencers in particular, blurring the line between the position of a follower or a fan, who on some networks is even referred to colloquially as a “friend,” and the position of a customer or a referral complicates an informal relationship where few duties are owed. Instead, these interactions can occasionally be viewed as a commercial relationship where much more responsibility exists and can potentially be breached.

  • In the United States, the Federal Trade Commission (FTC) is one of the regulators contemplating stronger restraints on the practices of influencer marketing. The main area of the FTC’s concern centers on disclosure of the relationships between brands and blogger influencers. Without full, clear disclosures, consumers cannot make reliable, informed choices about purchases they may be influenced to make due to influencer marketing content. The FTC hopes to protect consumers from being misled or ripped off entirely by influencer marketing that is targeted to them without the disclosures necessary for them to make ethical and financially wise decisions. The FTC has already informed influencers and advertisers that disclosure of relationships between them must be “clear and conspicuous,” with paid promotions clearly indicated as such so that they are not lost within the influencer’s unpaid content, engagement with which would not lead to a directly-linked commercial interaction. These regulations have been around for some time, but the extra enthusiasm for enforcing them protectively will have a much bigger impact on the market going forward: Regulating influencers: What retailers need to know about the regulatory crackdown
  • The SEC also has influencer marketing on its regulatory enforcement docket. This is an interesting clash of social media advertising etiquette and investor protection priorities. Companies offering trading of cryptocurrencies have begun to rely on celebrities for endorsements. Much of influencer marketing is done in “testimonial” style, so this medium lends itself well to a celebrity sharing his or her preferences with thousands or millions of followers. When that preference is for a cryptocurrency investment, however, the endorsement may run afoul of proper disclosure expectations. These regulatory expectations for cryptocurrencies are still evolving, as the market for initial coin offerings (ICOs) is still in its infancy and nearly everything that happens with cryptocurrencies is new, with its impact on banking, the markets, and investors as yet unproven. Central banks and regulators in different countries have taken wildly different approaches to handling demand for and developments in cryptocurrencies. In the US, this approach has been cautious and restrained, but one area in which the supervisors have not been quiet is protecting potential investors from advertisements without appropriate disclosures: SEC warns celebrities over endorsing ICOs without proper disclosure
  • Brands and influencers aren’t the only ones who may need to meet a higher disclosure standard when it comes to advertisements that aren’t immediately identifiable as such. Hidden marketing on social media sites is just as insidious as the political advertising that has received so much attention in the press recently. As Congress pushes social media platforms like Facebook to make clearer disclosures about and take more monitoring and control responsibility for the advertisements that appear on their sites, the need to build in protections against deceptive actions by marketers and their partners is urgent as well: It’s not just Facebook’s Russian ads: Hidden advertising is pervasive and growing
  • Social media compliance enforcement will be a major priority for the FTC in this regulatory environment. It should be expected that even amid regulatory rollbacks in other areas, the FTC will continue to pay attention to possible non-compliant social media posts, and advertisers and their related influencers could be subject to formal enforcement actions. Compared to some other industries like banking or pharmaceuticals, advertising agencies are subject to a relatively sparse supervisory agenda. This light regulatory touch may change dramatically if the FTC chooses to extend and entrench investigation and enforcement efforts on influencer marketing. This is worrying for the influencers as well, who are even less likely than advertising agencies or the marketing divisions of brands to have fully-formed compliance programs and to have in place the record-keeping and other regulatory controls they may need: How to Comply with FTC Social Media ‘Influencer’ Rules
  • For more on influencer marketing and the way that brands, advertisers, and influencers may use it to spread content in the future, check out this 2018 forecast for possible trends in the practice, which will in turn dictate the ensuing regulatory priorities, from Forbes: The Influencer Marketing Trends That Will Dominate 2018

Given these potential developments and risks, it is definitely not premature to direct appropriate and pro-active compliance attention to the cultivation and use of influencer marketing networks. Regulatory and supervisory entities are already starting to consider cracking down on various marketing activities in this sphere, and enforcement of disclosure and reporting standards will become robust and should be aided by proper control frameworks.



Compliance in Black Mirror

Black Mirror is a very popular US-UK television science fiction series. It originally aired on Channel 4 in the UK and is now released and broadcast by the subscription video streaming service Netflix. The series is anthology-style, with short seasons of stand-alone episodes which are like mini films. Most of the episodes of the series touch upon the dominance of and overreach into human life by technology, such as social media, AI, and other advanced, immersive systems and devices. The take offered is quite dramatic, often delving deeply into adverse psychological and sociological effects on modern society, taking a dark and even dystopian perspective.

While all the episodes of Black Mirror do depict a future reality, it is an immediate and accessible reality impacted by technology exceeding that which is currently possible but not so much as to be unthinkable. Indeed, the title of the show, Black Mirror, refers to current technology which is increasingly ubiquitous and addictive – television screens, computer monitors, and smartphone displays. The show both entices with the idea that many of these technological advancements could be convenient or novel or life-enhancing, while also warning that the obsessive and addictive aspects of technology could cause great harm and disruption if not developed and managed thoughtfully and carefully with the risks well in mind.

  • “The Entire History of You” (Series 1, Episode 3): In this episode, a couple struggling with mistrust and insinuations of infidelity make disastrous use of a common biometric – a “grain” implant everyone has that records everything they see, hear, and do. The recordings on the implants can be replayed via “re-dos.” This is used for surveillance purposes by security and management, as the memories can be played to an external video monitor for third parties to watch. Individuals can also watch the re-dos from their implants directly in their eyes, which allows them to repeatedly watch re-dos, often leading them to question and analyze the sincerity and credibility of people with whom they interact. People can also erase the records from their implants, altering the truthfulness of the recordings. This troubles the status of trust and honesty in society, which has already in contemporary life been eroded by the influence of the internet.




  • “Be Right Back” (Series 2, Episode 1): In this episode, Martha is mourning her boyfriend, Ash, who died in a car accident. As she struggles to deal with his loss, her friend, who has also lost a partner, recommends an online service that allows people to stay in touch with dead loved ones. The service crawls the departed person’s e-mail and social media profiles to create a virtual version of the person. After the machine learning advances enough by consuming and processing enough communications, it can also digest videos and photos, graduating from chatting via instant message to replicating the deceased’s voice and talking on the phone. At its most advanced, the service even allows a user to create an android version of the deceased that resembles him or her in every physical aspect and imitates the elements of the dead person’s personality that can be discovered from the online record. However, in all this there is no consideration given to the data privacy of the deceased person or to his or her consent to be exposed to machine learning and replicated in this manner, including even the physical android form.



  • “Nosedive” (Series 3, Episode 1): This is one of the most popular, critically-acclaimed episodes of the series, and one of the obvious reasons for this is that it focuses on social media and how it impacts friendships and interactions. The addictive aspects of social media in current times are already a hot topic in design ethics, driving people to question whether social media networks like Facebook or Twitter are good for the people who use them, and where to locate the line between a fun way to connect and share and a platform with a potentially dark and abusive impact on users. In this episode, everyone is on social media and is subject to receiving ratings from virtually everyone they encounter. These ratings determine people’s standing both on social media and in the real world as well – controlling access to jobs, customer service, housing, and much more. Anxieties and aspirations about ratings drive everything people do and all the choices they make. “Addictive” has been met and surpassed, with social media having an absolutely pervasive impact on everyone’s lives.



  • “San Junipero” (Series 3, Episode 4): One of the most universally loved episodes of Black Mirror, San Junipero depicts the titular beach town which mysteriously appears to shift in time throughout the decades. Kelly and Yorkie both visit the town and have a romance. San Junipero turns out to be a simulated reality which exists only “on the cloud,” where people who are at the end of their lives or who have already died can visit to live in their prime again, forever if they so choose. In the real world, Kelly is elderly and in hospice care, while Yorkie is a comatose quadriplegic. Both eventually choose to be euthanized and uploaded to San Junipero to be together forever, after first getting married so that Kelly can give legal authorization for Yorkie to pass over. The bioethical considerations of such a reality are clear – in this society, assisted suicide is a legal normalcy, and part of patient care is planning one’s method of death and treatment path after death, with digitalization being a real option. All of the San Junipero simulations exist on huge servers, and judging by how many lights are flickering in the racks this seems to be a popular practice – but what about cybersecurity and information security of the simulations? What if the servers were hacked or damaged? This gives a new meaning to humanity and places an entirely different type of pressure on making sure that technology is used safely and the data stored on it is protected.



  • “Men Against Fire” (Series 3, Episode 5): This episode concerns the future of warfare in a post-apocalyptic world. Soldiers all have a biometric implant called MASS that augments reality, enhances their senses, and provides virtual reality experiences. One soldier’s implant begins to malfunction and he soon learns that the MASS is in fact altering his senses so that he will not see individuals he is told are enemy combatants as people. It turns out that the soldier is part of a eugenics program practicing worldwide genocide and the MASS is being used to deceive the soldiers and turn them into autonomous weapons who murder on command due to the augmentations and alterations to reality by the MASS. This storyline falls uncannily close to many current concerns about the adoption of autonomous weapons that are not directed or monitored by humans, which are nearly within technological capability to be created and are the subject of international calls for appropriate supervision of and restraint in their development.



Black Mirror offers many interesting scenarios for analysis of and study by compliance and ethics professionals considering risk management related to the use of technology in organizations and society. As described above, surveillance, data privacy, consent, design ethics, autonomous weapons and other AI, bioethics, and cybersecurity are just a sampling of the issues invoked by episodes of the series.


Design ethics of addictive technology

As social media platforms, the internet of things, and other online networks advance in sophistication and prevalence, the line between engagement and addiction becomes ever thinner. Features which are designed to make browsing the internet or using connected devices more comfortable, intuitive, and pleasurable are also vulnerable to misuse and abuse which can have highly negative impact on people’s daily routines and lives.

Indeed, the stereotypes of people too engrossed in their phones or tablets to even notice the people around them are widespread and real. So much of social interaction has been carried over into online communities and takes place on social media or in internet comment sections and forums. The positive possibilities of this kind of access to information and collaboration are boundless. Connecting across continents and sharing all kinds of information and ideas is powerful for learning, cooperation, and creativity. Making these systems better and more efficient for users to engage with only further empowers these uses. Designers, engineers, and technologists have taken the positive responses from users and implemented that feedback in coming up with new features and improvements with the aim of making the user interface and experience better.

Whether it’s making screens balanced with vivid images that are easy on the eyes or implementing machine-learning based algorithms that fill users’ feeds with the most interesting and entertaining information tailored for them, the original aim of these innovations is to make the platform or device more interesting to use and therefore to encourage the user to spend more time on it. This has obvious commercial appeal to the companies that create these networks and devices, their advertisers, and their other partners who are all competing to attract people’s attention and gain valuable impressions or content views. Time is money, and a faithful user is a lucrative one.

However, those eyeballs content providers and marketers wish to attract are, of course, inside the heads of people, and therefore the ever-ramping effort to engage those people runs into risky territory where interest or active participation edges into dependency and addiction. There are countless studies which have shown health problems stemming from overuse of phones, tablets, computers, and other devices, including eye fatigue, migraines, sleep deprivation, and other problems related to vision, concentration, or stress caused by overindulgence in looking at screens. This is not to mention the destructive social impact that over-immersion in devices can have, isolating people from their families and communities as well as interrupting work, diminishing traditional communication skills, and exposing people to online abuse and other unsafe or inappropriate content that could cause harm.

In fact, some of the loudest voices against the dark side of the advancements of personal technology belong to the designers and engineers who had a hand in creating the most addictive features. For example, the engineer who was involved in creating the Facebook “Like” button and the designer who worked on the “pull to refresh” mechanism first used by Twitter are among a growing group of technologists who have started to question and reject the role that immersive technologies play in their lives. These individuals understand the good intentions that were behind the original creation of these technologies, with the hope to make them more useful or fun for users, but they also see the downsides. Dubbed “refuseniks,” these early adopters have purposefully made efforts to diminish or balance the presence of technology in their lives. As many of these addictive behaviors center around the use of smartphones and the applications on them, many of the people who designed these features and now speak out against them turn off notifications, uninstall particularly time-wasting applications, and even distance themselves physically from their phones by following strict personal rules about usage or cutting off access after certain times or in specific places.

The question remains – pioneers of these features may have matured within their own careers and lives enough to realize that their earlier intentions have destructive potential they don’t want to indulge personally. But how will companies creating products and services in this space balance this as public attention begins to more commonly acknowledge the problematic nature of these features? Being a refusenik cannot be the answer for everyone, as these devices and platforms do bring great value to their users and the world as a whole, despite the negative effect they can frequently also have. Organizations working in this space can take advantage of corporate social responsibility values to balance their innovation of new features with the expectations of how consumers can use them, for good or bad.

On an individual level, it is very helpful to take personal responsibility to acknowledge and understand how these platforms and technologies are designed to keep people engaged and how that engagement can turn to addiction. Being conscious of these features and tendencies in one’s own use is key. People should push themselves to understand why and how they use these technologies before adopting and engaging with them. If they feel prone to misusing a technology, then understanding the cause of that tendency and limiting their exposure to it will help to mitigate its effects.

For an interesting perspective on high-tech designers and technologists who have rejected the technologies they sometimes played pivotal roles in creating, check out this article from The Guardian.


Instagram and the internet’s code of ethics

Instagram is a very popular social media app based on sharing photos and videos, publicly and to selected users as well as via direct, private message. It was launched in 2010 and since April 2012 has been owned by Facebook, another giant in the social media industry. In less than the decade of its existence, Instagram has grown a very large and active community, where users can interact with their friends and “followers” as well as other communities who maintain a presence there, public figures, media sources, and corporate brands.

That all of these wildly different groups, from all over the world, share content and commentary on one platform is exciting and promises many opportunities for collaboration. Along with these positive connections, though, of course come negative surprises and possibilities for challenges and abuses. With all the influence Instagram has through its popularity comes also responsibility for defining the standards and limitations of the community, as well as for what it puts out into the internet and the world.

Instagram has faced its share of criticism over its efforts to implement and maintain effective controls and reporting mechanisms. The platform relies heavily on users to report inappropriate content, such as posts depicting illegal activity or the use of “coded” hashtags and emojis to conceal such practices while continuing them. Understandably, even the most aggressive attempts to keep pace with this behavior on social media will fall behind quickly, inviting criticism that the community is unsafe. When Instagram is too proactive, however, and overreaches in deleting comments, posts, or users, controversy about intrusions into privacy and expression follows.

Kevin Systrom, one of Instagram’s original creators and its current CEO, wants to strike this balance between protection from abuse and freedom of expression. Under his leadership, Instagram is dedicated to ensuring that the content and tone on the platform comply with its community guidelines. Changes to the comments sections on photos – including letting users filter out comments containing certain words, or post photos with the comment section disabled – are intended to encourage safer self-expression by posters who might otherwise fear harassment or offensive responses beneath their photos.
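The user-configurable word filter described above can be illustrated with a minimal sketch. This is purely illustrative, not Instagram’s actual implementation; all names here are hypothetical.

```python
# Hypothetical sketch of a keyword-based comment filter, similar in spirit
# to the user-configurable filters described above. Not Instagram's code.

def build_filter(blocked_words):
    """Return a function that flags comments containing any blocked word."""
    blocked = {w.lower() for w in blocked_words}

    def is_blocked(comment):
        # Case-insensitive match on whole words, ignoring trailing punctuation.
        tokens = comment.lower().split()
        return any(t.strip(".,!?") in blocked for t in tokens)

    return is_blocked

filter_comment = build_filter(["ugly", "spam"])
comments = ["Great shot!", "This is spam"]
visible = [c for c in comments if not filter_comment(c)]
```

Real systems are far more sophisticated (handling misspellings, coded hashtags, and emoji), but the basic design choice is the same: the poster, not the platform, selects which words to screen out.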

Platforms such as Instagram, of course, can never be neutral. Any technology’s relationship with its users is fraught with moral concerns, starting with the ethics of its design, and made only more complex by algorithms, bot accounts, and the real users whose own choices about what content to share and promote run the gamut from universally appropriate to offensive, harassing, or even illegal. In such a context, applying a code of ethics is a very hard task, but perhaps the inherent difficulty of doing so is what makes it so important to try.

Creating filters and tools to hide and promote, prevent and engage, whether deployed by community management behind the scenes or elected by users, is just the beginning of the design choices Instagram’s engineers have made to mount technical responses to problematic tone in some corners of the platform. Instagram also deploys artificial intelligence to help, sorting real posts from fake ones and learning from the data when seemingly innocent comments or content may be abusive in context, drawing on a technique called word embeddings, in which words are represented as vectors so that their meaning can be judged relative to the words around them. AI has its limitations, of course, but in any rules-based approach to governance it is necessary to start with something good and continually improve it, rather than leave risks unaddressed in hopeful pursuit of the best.
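The core idea behind word embeddings can be shown in a toy example: words become vectors of numbers, and similarity of meaning shows up as geometric closeness between those vectors. The vectors below are hand-made for illustration; real embeddings are learned from large text corpora.

```python
# Toy illustration of word embeddings. Words are mapped to vectors, and
# cosine similarity measures how close two words are in meaning. These
# two-dimensional vectors are invented for the example, not trained values.
import math

EMBEDDINGS = {
    "great":   [0.9, 0.1],
    "awesome": [0.8, 0.2],
    "awful":   [-0.7, 0.3],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, negative for opposites."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Words with similar meaning sit close together; opposites point apart.
sim_pos = cosine(EMBEDDINGS["great"], EMBEDDINGS["awesome"])
sim_neg = cosine(EMBEDDINGS["great"], EMBEDDINGS["awful"])
```

A moderation model built on such vectors can judge a comment by the company its words keep, which is how an apparently innocent phrase can still be flagged as abusive in context.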

Time will tell how effective Instagram’s efforts to make the platform a safer place for expression really are, and what they actually produce: a place open to creative sharing and communication, but closed to toxicity and abuse, or a censored, sanitized, disingenuous photo collection where self-expression is restricted and speech suppressed? Perhaps Instagram will succeed in going against the tide of the internet and much of life, where the level of social discourse seems to have sunk low, tinged by anger and dark with people’s worst impulses, and make a place where the conversation can be a bit more civil, even if it has to be filtered first to get there.

For more detail on Kevin Systrom’s ambition of making Instagram a safe haven and role model platform on the internet, and the challenges that both motivate and complicate this mission, see Nicholas Thompson’s story in Wired.


Round-up on ethics of design in technology

One of the most interesting and challenging inquiries in the evolving ethical code of technology concerns design choices. Ethical decision-making in design directly shapes the fluid, complex process of creating the devices, interfaces, and systems that are brought to market and used by consumers on a constant basis. In such a disruptive and innovative industry, every design decision carries moral costs: each new creation replaces or changes an existing one, and for everyone who gains new access or benefits, others bear the costs of these decisions. The ethics of design as applied to technology and, of particular interest, social media therefore have concrete importance for everyone living in a world increasingly dominated by user experiences, communities’ terms of service, and smart devices.

  • Former Google product manager Tristan Harris has gone viral with his commentary on the ethics of design in smartphones and the platforms creating apps for them. There is a point in online design where internet platforms go from being useful or intuitive to encouraging interruption and even obsession. Many people worry about the effect “screen time” may have on their attention span, quality of sleep, and offline interactions with people. Design techniques may actually keep people attached to their devices in a constant loop of advertisements, notifications, and links, as content providers and platforms compete to grab viewers’ attention. Alerting people to the control their devices have over their attention and time is one step, but urging more ethical choices in the design process is the next frontier for innovation reform:  Our Minds Have Been Hijacked By Our Phones. Tristan Harris Wants To Rescue Them.
  • The above phenomenon of addictive design has become so embedded in the creation of app features that even the most subtle changes can have a huge impact on the consumption practices of users. But when do features go from entertaining and user-friendly to compulsive, even addictive? Refreshing an app can be like pulling the lever on a slot machine, rewarding the brain with new content to keep the loop going at the expense of other activities and priorities. These design improvements, then, may function less as enhancements and more as manipulations:  Designers are using “dark UX” to turn you into a sleep-deprived internet addict
  • These small, ongoing redesigns are intended to make apps more readable and their content more captivating, enabling longer browsing, which again prompts the question: what is the ethical code for the control designers wield over users with these choices? From a design ethics perspective, small changes can be more alarming than major ones, precisely because they are so incremental that many users never consciously notice them; “optimization” tips into “over-optimization,” and meaningful interaction becomes possibly destructive:  Facebook and Instagram get redesigns for readability
  • Artificial intelligence always captures the public’s imagination – thrills and fears about the developing capabilities of robots and predictive algorithms that could direct, define, and perhaps threaten human existence in the future. AI has been developing in recent years at a breakneck pace, and all indications are that this innovation will continue or accelerate in the coming years. The science fiction-esque impact of AI on society will grow and bring with it all kinds of ethical concerns about the ability of humans to define and control it in a timely and effective way:  Ethics — the next frontier for artificial intelligence
  • Social media platforms have developed into social systems, with all the dilemmas and dynamics that come along with that. These networks may face the choice between engagement and all of the thorny dialogs that come with it, and a simpler, more remote model that can be enjoyable but is less interactive and therefore, perhaps, less provocative:  ‘Link in Bio’ Keeps Instagram Nice

Queries into design ethics and choice theory in technology, especially social media, ask what human experience will become in a world that is increasingly digitized and networked. The design decisions made in creating these devices and systems require an ethical code and a sense of social responsibility in order to define the boundaries of the best collective choices.