Round-up on compliance issues with online platforms: Facebook

This is the second in a series of six posts on compliance issues with various online platforms.  Last week’s post was about YouTube.  Today’s post will be about Facebook.  Next week’s post will discuss Instagram.  The fourth post in the series, on March 29, will focus on Twitter.  The fifth post, on April 5, will be about Snapchat.  On April 12, the sixth and final post in the series will discuss Reddit.

The online social media site Facebook was created in 2004 and has since become one of the most well-known online platforms. Facebook began as a social networking service by and for Harvard University students, then expanded to the broader Ivy League and the general university community before opening up in 2006 to all users who meet the local minimum age requirement.  Since 2012, Facebook has been publicly listed on the NASDAQ stock exchange.

Facebook’s rise to extreme popularity coincided with disruptive innovations in Internet-enabled devices beyond traditional computers, such as smartphones and tablets. As the site grew its user base, it became an immersive and highly engaging platform for people to share a wide variety of personal information, partake in social interactions, upload media such as photos and videos, and participate in community-based activities organized by profession, background, and interests.

Because Facebook is one of the most prominent companies in the social media field, the technology industry, and by now the world in general, it has been the subject of comprehensive media coverage and commentary, particularly coverage highlighting its more controversial aspects and practices. Privacy, data collection and usage, dissemination of false or abusive content, and the social and psychological impact of addictive technology features on users are just some of the issues on which Facebook has been criticized, challenged, and pressured to reform its practices and standards.  On all of these topics and more, there are many ongoing developments in Facebook’s activities that are interesting from a compliance and ethics point of view.

  • WhatsApp, the mobile messaging service acquired by Facebook in 2014, is very popular in Europe. Therefore, with the upcoming implementation of the new EU privacy regulation, the General Data Protection Regulation (for more on GDPR, check out this post), new supervisory scrutiny will be turned toward data sharing from WhatsApp to its parent Facebook: UK ICO rules WhatsApp sharing data with Facebook would be illegal
  • One of the highest priorities for Facebook, as for any major platform on the internet, is to maintain protective cybersecurity controls. Facebook and its various ancillary services are frequent targets of various types of cyberattacks. Its own malware detection efforts, however, have proven, contrary to their intentions, to be misleading or even dangerous for users and the security of their data: Facebook’s Mandatory Malware Scan Is an Intrusive Mess
  • Facebook has frequently been portrayed as the posterchild for addictive technology. Many of the features cited as the most psychologically absorbing, and therefore potentially unhealthy or irresponsible for users, were pioneered on Facebook. The social responsibility of the designers, engineers, and technologists behind the company’s innovations has therefore been frequently questioned (for more on design ethics, check out this post, and on addictive technology, including the trend of “refuseniks,” check out this post). Design ethics must respond to evidence of user engagement that tips over into dependency: Venture capitalist Alan Patricof calls Facebook and Google addictive, “virtual monopolies”
  • Corporate social responsibility, or CSR, has become increasingly popular for companies in a variety of industries to embrace as a source of corporate identity and engagement with customers and users (for more on the ways companies may use CSR as an expression of corporate culture and values, check out this post). While Facebook founder Mark Zuckerberg is personally philanthropic, Facebook as a corporate entity has often struggled to participate meaningfully or authentically in social movements or to take a lasting stance as an organization on political and cultural issues. Facebook’s progress in this area seems to be a dance of one step forward and two steps back, as it often attempts to engage and then makes costly blunders or is accused of irresponsible practices: Zuckerberg says Facebook doesn’t focus on charity because it’s already doing plenty of good
  • Like YouTube, Facebook has committed to using more human monitoring of content shared on the site in order to combat the ineffective or inaccurate results that artificial intelligence may produce. Content moderators review an enormous volume of photos, videos, posts, profiles, and groups that have been flagged for review by either users or the site’s algorithms, in order to determine whether the content should be removed due to its false, abusive, offensive, or violent nature. These moderation jobs are challenging and demoralizing, and anecdotal evidence suggests that the people who do them may not be adequately trained for or protected from the hardship and distress of the role: Underpaid and overburdened: the life of a Facebook moderator

Check back next Thursday for the next post in this series, which will focus on Instagram.