Compliance and social media influencers

Influencer marketing has become a major trend in the advertising industry with the increasing dominance of social media and blog networks in the media landscape. With influencer marketing, brands and their advertising agencies identify the individuals to whom certain demographic groups look for suggestions on trends or on products and services to purchase. These individuals, referred to as “influencers,” then share or produce editorial content for their followers (the people who like or connect to them on social media networks) or engage in the brand’s marketing activities.

Through these sorts of campaigns, both the brands and the influencers hope to gain a non-traditional advantage in appealing to a wider audience. From the brand perspective, they get creative and incredibly targeted content that is produced on a bespoke basis for very specific consumers who are already engaged and interested in the channel through which the content is shared. Through the detailed metrics that are abundantly available via social media and blogs, advertisers can determine which campaigns were successful in spurring either interest or actual sales. From the influencer perspective, they get opportunities to generate paid content and engage with their followers and fans in a novel way. Relationships with brands can be very lucrative for influencers, especially if they become long-term, and can drive significant, much-desired traffic for blogs and social media posts that brings attention to other content the influencer has to offer.

From the above, it is evident that along with all the opportunity comes a complex set of interests which may end up in conflict or give rise to concerns about business practices and the accuracy of representations and disclosures. For influencers in particular, blurring the line between the position of a follower or fan, which on some networks is even referred to colloquially as a “friend,” and the position of a customer or referral complicates what was an informal relationship where few duties are owed. These interactions can instead come to be viewed as a commercial relationship where much more responsibility exists and can potentially be breached.

  • In the United States, the Federal Trade Commission (FTC) is one of the regulators contemplating stronger restraints on the practices of influencer marketing. The main area of the FTC’s concern centers on disclosure of the relationships between brands and blogger influencers. Without full, clear disclosures, consumers cannot make reliable, informed choices about purchases they may be influenced to make by influencer marketing content. The FTC hopes to protect consumers from being misled, or ripped off entirely, by influencer marketing that is targeted to them without the disclosures necessary for them to make ethical and financially wise decisions. The FTC has already informed influencers and advertisers that disclosure of relationships between them must be “clear and conspicuous,” with paid promotions clearly indicated as such so that they are not lost within the influencer’s unpaid content, which a follower could engage with without entering a directly-linked commercial interaction. These regulations have been around for some time, but renewed enthusiasm for enforcing them protectively will have a much bigger impact on the market going forward: Regulating influencers: What retailers need to know about the regulatory crackdown
  • The SEC also has influencer marketing on its regulatory enforcement docket. This is an interesting clash of social media advertising etiquette and investor protection priorities. Companies offering trading of cryptocurrencies have begun to rely on celebrities for endorsements. Much of influencer marketing is done in “testimonial” style, so this medium lends itself well to a celebrity sharing his or her preferences with thousands or millions of followers. When that preference is for a cryptocurrency investment, however, the endorsement may run afoul of proper disclosure expectations. These regulatory expectations for cryptocurrencies are still evolving, as the market for initial coin offerings (ICOs) is still in its infancy and nearly everything that happens with cryptocurrencies is new, with its impact on banking, the markets, and investors as yet unproven. Central banks and regulators in different countries have taken wildly different approaches to handling demand for and developments in cryptocurrencies. In the US, this approach has been cautious and restrained, but one area in which the supervisors have not been quiet has been protecting potential investors from advertisements without appropriate disclosures: SEC warns celebrities over endorsing ICOs without proper disclosure
  • Brands and influencers aren’t the only ones who may need to meet a higher disclosure standard when it comes to advertisements that aren’t immediately identifiable as such. Hidden marketing on social media sites is just as insidious as the political advertising that has received so much attention in the press recently. As Congress pushes social media platforms like Facebook to make clearer disclosures about and take more monitoring and control responsibility for the advertisements that appear on their sites, the need to build in protections against deceptive actions by marketers and their partners is urgent as well: It’s not just Facebook’s Russian ads: Hidden advertising is pervasive and growing
  • Social media compliance enforcement will be a major priority for the FTC in this regulatory environment. It should be expected that, even amid regulatory rollbacks in other areas, the FTC will continue to pay attention to potentially non-compliant social media posts, and advertisers and their related influencers could be subject to formal enforcement actions. Compared to some other industries like banking or pharmaceuticals, advertising agencies are subject to a relatively sparse supervisory agenda. This light regulatory touch may change dramatically if the FTC chooses to extend and entrench investigation and enforcement efforts on influencer marketing. This is worrying for influencers as well, who are even less likely than advertising agencies or marketing divisions of brands to have fully-formed compliance programs or to have the record-keeping and other regulatory controls they may need in place and up to speed: How to Comply with FTC Social Media ‘Influencer’ Rules
  • For more on influencer marketing and the way that brands, advertisers, and influencers may use it to spread content in the future, check out this 2018 forecast for possible trends in the practice, which will in turn dictate the ensuing regulatory priorities, from Forbes: The Influencer Marketing Trends That Will Dominate 2018

Given these potential developments and risks, it is definitely not premature to direct appropriate and pro-active compliance attention to the cultivation and use of influencer marketing networks. Regulatory and supervisory entities are already starting to consider cracking down on various marketing activities in this sphere, and enforcement of disclosure and reporting standards will become robust and should be aided by proper control frameworks.


Compliance in Black Mirror

Black Mirror is a very popular US-UK television science fiction series. It originally aired on Channel 4 in the UK and is now released and broadcast by the subscription video streaming service Netflix. The series is anthology-style, with short seasons of stand-alone episodes that are like mini films. Most of the episodes touch upon the dominance of technology in, and its overreach into, human life, including social media, AI, and other advanced, immersive systems and devices. The take offered is quite dramatic, often delving deeply into adverse psychological and sociological effects on modern society and adopting a dark, even dystopian perspective.

While all the episodes of Black Mirror do depict a future reality, it is an immediate and accessible reality impacted by technology exceeding that which is currently possible but not so much as to be unthinkable. Indeed, the title of the show, Black Mirror, refers to current technology which is increasingly ubiquitous and addictive – television screens, computer monitors, and smartphone displays. The show both entices with the idea that many of these technological advancements could be convenient or novel or life-enhancing, while also warning that the obsessive and addictive aspects of technology could cause great harm and disruption if not developed and managed thoughtfully and carefully with the risks well in mind.

  • “The Entire History of You” (Series 1, Episode 3): In this episode, a couple struggling with mistrust and insinuations of infidelity make disastrous use of a common biometric device – a “grain” implant everyone has that records everything they see, hear, and do. The recordings on the implants can be replayed via “re-dos.” This is used for surveillance purposes by security and management, as the memories can be played on an external video monitor for third parties to watch. Individuals can also watch re-dos from their implants directly in their eyes, which allows them to repeatedly review interactions, often leading them to question and analyse the sincerity and credibility of the people with whom they interact. People can also erase records from their implants, altering the truthfulness of the recordings. This troubles the status of trust and honesty in a society where both have already been eroded in contemporary life by the influence of the internet.


  • “Be Right Back” (Series 2, Episode 1): In this episode, Martha is mourning her boyfriend, Ash, who died in a car accident. As she struggles to deal with his loss, her friend, who has also lost a partner, recommends an online service that allows people to stay in touch with dead loved ones. The service crawls the departed person’s e-mail and social media profiles to create a virtual version of the person. As the machine learning advances by consuming enough of these communications, the service can also digest videos and photos, graduating from chatting via instant message to replicating the deceased’s voice and talking on the phone. At its most advanced, the service even allows a user to create an android version of the deceased that resembles him or her in every physical aspect and imitates the elements of the dead person’s personality that can be discovered from the online record. However, in all this there is no consideration given to the data privacy of the deceased person or to his or her consent to be exposed to machine learning and replicated in this manner, including even in the physical android form.


  • “Nosedive” (Series 3, Episode 1): This is one of the most popular, critically acclaimed episodes of the series, and one of the obvious reasons for this is that it focuses on social media and how it impacts friendships and interactions. The addictive aspects of social media in current times are already a hot topic in design ethics, driving people to question whether social media networks like Facebook or Twitter are good for the people who use them, and where to locate the line between entertainment, a fun way to connect and share, and a platform with a potentially dark and abusive impact on users. In this episode, everyone is on social media and is subject to receiving ratings from virtually everyone they encounter. These ratings determine people’s standing both on social media and in the real world as well – controlling access to jobs, customer service, housing, and much more. Anxieties and aspirations about ratings drive everything people do and all the choices they make. “Addictive” has been met and surpassed, with social media having an absolutely pervasive impact on everyone’s lives.


  • “San Junipero” (Series 3, Episode 4): One of the most universally loved episodes of Black Mirror, San Junipero depicts the titular beach town, which mysteriously appears to shift in time throughout the decades. Kelly and Yorkie both visit the town and have a romance. San Junipero turns out to be a simulated reality which exists only “on the cloud,” where people who are at the end of their lives or who have already died can visit to live in their prime again, forever if they so choose. In the real world, Kelly is elderly and in hospice care, while Yorkie is a comatose quadriplegic. Both eventually choose to be euthanized and uploaded to San Junipero to be together forever, after first getting married so that Kelly can give legal authorization for Yorkie to pass over. The bioethical considerations of such a reality are clear – in this society, assisted suicide is a legal norm, and part of patient care is planning one’s method of death and treatment path after death, with digitalization being a real option. All of the San Junipero simulations exist on huge servers, and judging by how many lights are flickering in the racks, this seems to be a popular practice – but what about the cybersecurity and information security of the simulations? What if the servers were hacked or damaged? This gives a new meaning to humanity and places an entirely different type of pressure on making sure that technology is used safely and the data stored on it is protected.


  • “Men Against Fire” (Series 3, Episode 5): This episode concerns the future of warfare in a post-apocalyptic world. Soldiers all have a biometric implant called MASS that augments reality, enhances their senses, and provides virtual reality experiences. One soldier’s implant begins to malfunction, and he soon learns that the MASS is in fact altering his senses so that he does not see the individuals he is told are enemy combatants as people. It turns out that the soldier is part of a eugenics program practicing worldwide genocide, and the MASS is being used to deceive the soldiers and turn them into autonomous weapons who murder on command due to the augmentations and alterations to reality by the MASS. This storyline falls uncannily close to many current concerns about the adoption of autonomous weapons that are not directed or monitored by humans, which are nearly within current technological capability and are the subject of international calls for appropriate supervision of and restraint in their development.


Black Mirror offers many interesting scenarios for analysis of and study by compliance and ethics professionals considering risk management related to the use of technology in organizations and society. As described above, surveillance, data privacy, consent, design ethics, autonomous weapons and other AI, bioethics, and cybersecurity are just a sampling of the issues invoked by episodes of the series.

Design ethics of addictive technology

As social media platforms, the internet of things, and other online networks advance in sophistication and prevalence, the line between engagement and addiction becomes ever thinner. Features which are designed to make browsing the internet or using connected devices more comfortable, intuitive, and pleasurable are also vulnerable to misuse and abuse which can have a highly negative impact on people’s daily routines and lives.

Indeed, the stereotypes of people too engrossed in their phones or tablets to even notice the people around them are widespread and real. So much of social interaction has been carried over into online communities and takes place on social media or in internet comment sections and forums. The positive possibilities of this kind of access to information and collaboration are boundless. Connecting across continents and sharing all kinds of information and ideas is powerful for learning, cooperation, and creativity. Making these systems better and more efficient for users to engage with only further empowers these uses. Designers, engineers, and technologists have taken the positive responses from users and implemented that feedback in coming up with new features and improvements with the aim of making the user interface and experience better.

Whether it’s making screens balanced with vivid images that are easy on the eyes or implementing machine-learning based algorithms that fill users’ feeds with the most interesting and entertaining information tailored for them, the original aim of these innovations is to make the platform or device more interesting to use and therefore to encourage the user to spend more time on it. This has obvious commercial appeal to the companies that create these networks and devices, their advertisers, and their other partners who are all competing to attract people’s attention and gain valuable impressions or content views. Time is money, and a faithful user is a lucrative one.

However, those eyeballs content providers and marketers wish to attract are, of course, inside the heads of people, and therefore the ever-ramping effort to engage those people runs into risky territory where interest or active participation edges into dependency and addiction. Countless studies have shown health problems stemming from overuse of phones, tablets, computers, and other devices, including eye fatigue, migraines, sleep deprivation, and other problems related to vision, concentration, or stress caused by overindulgence in looking at screens. This is not to mention the destructive social impact that over-immersion in devices can have, isolating people from their families and communities, as well as interrupting work, diminishing traditional communication skills, and exposing people to online abuse and other unsafe or inappropriate content that could cause harm.

In fact, some of the loudest voices against the dark side of the advancements of personal technology are the designers and engineers who had a hand in actually creating the most addictive features. For example, the engineer who was involved in creating the Facebook “Like” button and the designer who worked on the “pull to refresh” mechanism first used by Twitter are among a growing group of technologists who have started to question and reject the role that immersive technologies play in their lives. These individuals understand the good intentions behind the original creation of these technologies, with the hope of making them more useful or fun for users, but they also see the downsides. Dubbed “refuseniks,” these early adopters have purposefully made efforts to diminish or balance the presence of technology in their lives. As many of these addictive behaviors center around the use of smartphones and the applications on them, many of the people who designed these features and now speak out against them turn off notifications, uninstall particularly time-wasting applications, and even distance themselves physically from their phones by following strict personal rules about usage or cutting off access after certain times or in specific places.

The question remains: pioneers of these features may have matured enough within their own careers and lives to realize that their earlier intentions have destructive potential they don’t want to indulge personally, but how will the companies creating products and services in this space strike that balance as public attention begins to more commonly acknowledge the problematic nature of these features? Being a refusenik cannot be the answer for everyone, as these devices and platforms do bring great value to their users and the world as a whole, despite the negative effects they can frequently also have. Organizations working in this space can draw on corporate social responsibility values to balance their innovation of new features against the ways consumers can be expected to use them, for good or bad.

On an individual level, it is very helpful to take personal responsibility to acknowledge and understand how these platforms and technologies are designed to keep people engaged and how that engagement can turn into addiction. Being conscious of these features and tendencies is key. People should push themselves to understand why and how they use these technologies before adopting and engaging with them. If they feel prone to misusing a technology, then understanding the causes of that tendency and their exposure to it will help mitigate its effects.

For an interesting perspective on high-tech designers and technologists who have rejected the technologies they sometimes played pivotal roles in creating, check out this article from The Guardian.

Instagram and the internet’s code of ethics

Instagram is a very popular social media app based on sharing photos and videos, publicly, to selected users, and via direct, private message. It was launched in 2010 and since April 2012 has been owned by Facebook, another giant in the social media industry. In less than a decade of existence, Instagram has grown a very large and active community, where users can interact with their friends and “followers” as well as with other communities that maintain a presence there, public figures, media sources, and corporate brands.

That all of these wildly different groups, from all over the world, share content and commentary on one platform is exciting and promises many opportunities for collaboration. Along with these positive connections, though, of course come negative surprises and possibilities for challenges and abuses. With all the influence Instagram has through its popularity comes responsibility for defining the standards and limitations of the community, as well as of what it will put out into the internet and the world.

Instagram has faced its share of criticism for its efforts to implement and maintain effective controls and reporting mechanisms. Instagram relies heavily on user reporting of inappropriate content, such as posts depicting illegal activity or posts using “coded” hashtags and emojis to conceal such practices while continuing them. Understandably, even the most aggressive attempts to keep up with the pace of this behavior on social media will fall behind quickly, leading to criticism that the community is unsafe. When Instagram is too proactive or overreaches in deleting comments, posts, or users, however, controversy about intrusion into privacy and expression begins in response.

Kevin Systrom, one of the original creators of Instagram and its current CEO, wants to strike this balance between protection from abuse and freedom of expression. Under his leadership, Instagram is dedicated to ensuring that the content and tone on the platform are compliant with its community guidelines. Changes to the comments sections on photos – including allowing users to filter out comments that contain certain words, or to post photos without comment sections available – are intended to encourage safer self-expression by posters who might otherwise fear harassment or offensive content in response below their photos.
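The keyword-based comment filtering described above can be sketched in a few lines. This is a purely illustrative sketch, not Instagram’s actual implementation; the function names and the blocked-word list are hypothetical:

```python
# Illustrative sketch of a user-defined keyword comment filter,
# similar in spirit to the filtering feature described above.
# Word list and matching rules are hypothetical examples.
import re


def build_filter(blocked_words):
    """Compile a case-insensitive pattern matching any blocked word as a whole word."""
    pattern = r"\b(" + "|".join(re.escape(w) for w in blocked_words) + r")\b"
    return re.compile(pattern, re.IGNORECASE)


def filter_comments(comments, blocked_words):
    """Return only the comments that contain none of the blocked words."""
    matcher = build_filter(blocked_words)
    return [c for c in comments if not matcher.search(c)]


comments = ["Great photo!", "You are a LOSER", "Love this place"]
print(filter_comments(comments, ["loser", "ugly"]))
```

Even this toy version shows why the approach is imperfect: whole-word matching misses misspellings and the “coded” substitutes mentioned earlier, which is one reason platforms supplement word lists with machine learning.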

Platforms such as Instagram, of course, can never be neutral – any technology’s relationship with its user is one that is fraught with moral concerns, starting right at the ethics of its design, which is made only more complex by algorithms, robot users, and the real users who make their own decisions about the content to share and promote that run the gamut from universally appropriate to offensive, harassing, or even illegal. In such a context, applying a code of ethics is a very hard task, but perhaps it is the inherent difficulty of doing this that makes it so important to try.

Creating filters and tools to hide and promote, prevent and engage, whether deployed by the community management behind the scenes or elected by users, is just the beginning of the design choices engineers at Instagram have made to implement technical responses to problematic tone in some corners of the platform. Instagram also tries to deploy artificial intelligence to help, sorting real posts from fake ones and learning from data – using a technique called word embeddings – why seemingly innocent words or content may be abusive in a particular context. AI has its limitations, of course, but in any rules-based approach to governance it is necessary to start with something good and then make continual efforts to improve it, rather than leave risks unaddressed while in hopeful pursuit of the best.
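The word-embedding idea mentioned above can be illustrated with a toy sketch: words are mapped to vectors, and a word is flagged when its vector lies close to those of known abusive terms, so near-synonyms and coded substitutes can be caught without being on any explicit list. The tiny hand-made vectors below are hypothetical and exist only for illustration; a real system would learn embeddings from large text corpora:

```python
# Toy sketch of embedding-based abuse detection.
# The three-dimensional vectors below are invented for illustration;
# real embeddings are learned from data and have hundreds of dimensions.
import math

EMBEDDINGS = {
    "idiot":  [0.90, 0.10, 0.00],
    "moron":  [0.85, 0.15, 0.05],  # near "idiot" in the toy space
    "sunset": [0.00, 0.20, 0.95],
    "beach":  [0.05, 0.25, 0.90],
}
ABUSIVE_SEEDS = ["idiot"]  # hypothetical seed list of known abusive terms


def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm


def looks_abusive(word, threshold=0.95):
    """Flag a word whose embedding is very close to a known abusive seed word."""
    vec = EMBEDDINGS.get(word)
    if vec is None:
        return False  # unknown word: no embedding to compare
    return any(cosine(vec, EMBEDDINGS[s]) >= threshold for s in ABUSIVE_SEEDS)


print(looks_abusive("moron"))   # close to the seed "idiot"
print(looks_abusive("sunset"))  # far from any abusive seed
```

The limitation the article alludes to is visible even here: similarity in embedding space captures typical usage, not intent, so context is still needed to tell an insult from an in-joke.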

Time will tell how effective Instagram’s efforts to make the platform a safer place for expression really are, and what they really accomplish – a place which is open for creative sharing and communication, but not to toxicity and abuse, or a sanitized, disingenuous photo collection where self-expression is restricted and speech censored? Perhaps Instagram will succeed in going against the tide on the internet and in much of life, where the level of social discourse seems to have sunk low, tinged by anger and dark with people’s worst impulses, and make a place where the conversation can be a bit more civil, even if it has to be filtered first to get there.

For more detail on Kevin Systrom’s ambition of making Instagram a safe haven and role model platform on the internet, and the challenges that both motivate and complicate this mission, see Nicholas Thompson’s story on Wired.

Round-up on ethics of design in technology

One of the most interesting and challenging inquiries in the evolving ethical code of technology has to do with design choices. Ethical decision-making and process design have a direct impact on the fluid, complex process of creating the devices, interfaces, and systems that are brought to market and used by consumers on a constant basis. In such a disruptive and innovative industry, there are moral costs for every design decision: every new creation replaces or changes an existing one, and for everyone who gains new access or benefits, others experience the costs of these decisions. Therefore the ethics of design as applied to technology and, of particular interest, social media, have concrete importance for everyone living in a world increasingly dominated by user experiences, communities’ terms of service, and smart devices.

  • Former Google product manager Tristan Harris has gone viral with his commentary on the ethics of design in smartphones and the platforms creating apps for them. There is a balance in online design where internet platforms go from being useful or intuitive to encouraging interruption and even obsession. Many people worry about the effect “screen time” may have on their attention span, quality of sleep, and offline interactions with people. Design techniques may actually keep people attached to their devices in a constant loop of advertisements, notifications, and links, as content providers and platforms compete to grab viewers’ attention. Alerting people to the control their devices have over their attention and time is one step, but urging more ethical choices in the design process is the next frontier for innovation reform: Our Minds Have Been Hijacked by Our Phones. Tristan Harris Wants to Rescue Them.
  • The above phenomenon of addictive design has become so embedded in the creation of app features that even the most subtle changes can have a huge impact on the consumption practices of users. But when do features go from entertaining and user-friendly to compulsive, even addictive? Refreshing an app can be like pulling the lever on a slot machine, giving the brain rewards in the form of new content to keep the loop going at the expense of other activities and priorities. These design improvements, then, may actually operate on users more as manipulations: Designers are using “dark UX” to turn you into a sleep-deprived internet addict
  • These small, ongoing redesigns are intended to make apps more readable and consumable. These periodic improvements are intended to make content more captivating and enable longer browsing – again prompting the question, what is the ethical code for the control designers wield over users with these choices? From a design ethics perspective, these small changes can be viewed as more alarming than major ones: they are so incremental that many users do not consciously notice them, so “optimization” tips into “over-optimization” and meaningful interaction becomes potentially destructive: Facebook and Instagram get redesigns for readability
  • Artificial intelligence always captures the public’s imagination – thrills and fears about the possible developing capabilities of robots and predictive algorithms that could direct and define – and perhaps threaten – human existence in the future. AI has been developing in recent years at a breakneck pace, and all indications are that this innovation will continue or multiply in the coming period. The science fiction-esque impact of AI on society will grow and bring with it all kinds of ethical concerns about the abilities of humans to define and control it in a timely and effective way:  Ethics — the next frontier for artificial intelligence
  • Social media platforms have developed into social systems, with all the dilemmas and dynamics that come along with that. These networks may face the choice between engagement and all of the thorny dialogs that come with it, and a simpler, more remote model that can be enjoyable but is less interactive and therefore, perhaps, less provocative:  ‘Link in Bio’ Keeps Instagram Nice

Queries into design ethics and choice theory in technology, especially social media, ask what human experience will evolve into in a world which is increasingly digitized and networked. The design decisions made in the creation of these devices and systems require an ethical code and a sense of social responsibility in order to define the boundaries of the best collective choices.