Practical insights for compliance and ethics professionals and commentary on the intersection of compliance and culture.

Round-up on compliance issues with online platforms: Instagram

This is the third in a series of six posts on compliance issues with various online platforms.  The first post was about YouTube.  Last week’s post was about Facebook.  Today’s post, the third in the series, will discuss Instagram.  Next week’s post will focus on Twitter.  On April 5, the fifth post in the series will cover Snapchat.  The sixth and last post in the series, on April 12, will be about Reddit.

The photo and video sharing and social media service Instagram was created in 2010 and has been owned by Facebook since 2012. Instagram has grown massively in popularity, adding users at a rapid pace year after year, and has created features, such as thematic hashtags and aesthetically curated content, that inspire enormous engagement and imitation on other platforms.


Round-up on compliance issues with online platforms: Facebook

This is the second in a series of six posts on compliance issues with various online platforms.  Last week’s post was about YouTube.  Today’s post will be about Facebook.  Next week’s post will discuss Instagram.  The fourth post in the series, on March 29, will focus on Twitter.  The fifth post, on April 5, will be about Snapchat.  On April 12, the sixth and final post in the series will discuss Reddit.

The online social media site Facebook was created in 2004 and has since become one of the most well-known online platforms. Facebook was originally created as a social networking service by and for Harvard University students, then expanded to the broader Ivy League and general university community before opening up in 2006 to all users who meet the local minimum age requirement. Since 2012, Facebook has been publicly listed on the NASDAQ stock exchange.

Facebook’s rise to extreme popularity coincided with disruptive innovations in Internet-enabled devices beyond traditional computers, such as smartphones and tablets. As the site grew its user base, it became an immersive and highly engaging platform for people to share a wide variety of personal information, partake in social interactions, upload media such as photos and videos, and participate in community-based activities organized by profession, background, and interests.


Round-up on the humanity of artificial intelligence

Human fascination with, and even obsession with, robots is nothing new. For many years people have imagined distant versions of the future in which human interaction with robots, androids, and other robotics products is a routine part of life both at work and at home. Sometimes these forward-looking scenarios focus on convenience, service, and speed. Much more often, however, when asked to contemplate a future with ubiquitous artificial intelligence (AI) technology embedded alongside humans, thoughts stray toward possible troubling or dark impacts on society. People worry about a loss of humanity as technology predominates, or about the possibility that robots could be misused or even gain sentience and work against or harm humans.

In the past these scenarios, both of the positive advancement of society and of the potential for isolating, dangerous dystopia, were mostly relegated to science fiction books, Hollywood blockbuster movies, or what were seen as overactive imaginations or paranoid opinions of luddites. Now, however, the news is full every day of developments in AI technology that bring the once-imaginary potential of robots ever closer to present reality.

As technologists and business organizations consider the utility of advances in AI, ethicists and corporate compliance programs must also consider the risk management issues that come along with robots and robotics. Technology that will have such a broad and deep impact on human life must be anticipated with thoughtful planning for the compliance risks that can arise. In particular, the potential for sharing human traits with AI technology, or for embedding AI technology in place of human judgment, presents provocative challenges.

  • Anticipating increased interactions with androids – robots that look like humans and can speak, walk, and otherwise “act” as humans do – leads to the logical question of whether humans will have relationships with androids, and vice versa. These would be not just transactional interactions, like giving and receiving directions or speaking back and forth on a script written to take advantage of or increase machine learning within the android, but intimate, emotionally significant exchanges that build real connections. How can this be when only one side of the equation – the human – is assumed to be able to feel and think freely? While technical production of robots that appear credibly human-like is still beyond the reach of current science, and giving them a compelling human presence that could fool or attract a human is even further away, work on these tasks is well underway, and it is not unreasonable to consider the possible consequences of these developments. Will humans feel empathy and other emotions for androids? Can people ever trust robots that seem to be, but aren’t, people? Will the lines between “us” and “them” blur? The burgeoning field of human-robot interaction research seeks to answer these questions and develop technology that responds to and considers these tensions:  Love in the Time of Robots
  • On a similar note, when could machine learning become machine consciousness? Humans have embraced the usefulness of AI technologies that become smarter and more effective over time as they are exposed to more knowledge and experience. This is a great argument for deploying technology to support and improve efficiency and productivity; everyone wants computers, networked devices, and other products that use advanced technology to work more accurately and easily. Machine consciousness, however, suggests independent sentience or judgment, the potential of which unsettles humans. From a compliance and ethics perspective there is an extra curiosity inherent in this: what will be the morality of these machines if they achieve consciousness? Will they have a reliable code of ethics from which they do not stray and which comports with human societal expectations? Will they struggle with ethical decision-making and frameworks as humans do? Or will human and human-like practical ethics diverge completely?  Can Robots be Conscious?
  • In 2016, David Hanson of Hanson Robotics created a humanoid robot named Sophia. At his prompting during a live demonstration at the SXSW festival, Sophia answered his question “Do you want to destroy humans?… Please say ‘no’” by saying, “OK. I will destroy humans.” Despite this somewhat alarming declaration, during the demonstration Sophia also said that she was essentially an input-output system, and therefore would treat humans the way humans treated her. The intended purpose of Sophia and future robots like her is to provide assistance in patient care at assisted living facilities and in visitor services at parks and events. In October 2017, Saudi Arabia recognized the potential of the AI technology which makes Sophia possible by granting her citizenship ahead of its Future Investment Initiative event. A robot that once said it would ‘destroy humans’ just became a robot citizen in Saudi Arabia
  • The development of humanoid robots will certainly become a bioethics issue as the technology to extend their human traits comes within reach. While there are many compelling cases for how highly advanced AI could be good for the world, the risk of making robots somehow too human will always be evocative and concerning to people. The gap between humans and human-like androids is called the uncanny valley: the space between organic and inorganic, natural and artificial, cognitive and learned. The suggestion that the future of human evolution could be “synthetic” – aided by or facilitated through the development of androids and other robotics – presents a fascinating challenge to bioethics. Are humanoid robots objects or devices, like computers or phones? It is necessary to consider humans and androids in comparison to one another, just as we do humans and animals, for example. This ethical dilemma gets to the root of what the literal meaning or definition of life is and what it takes for someone, or something, to be considered alive:  Six Life-Like Robots That Prove The Future of Human Evolution is Synthetic
  • One of the potential uses of AI technology that worries people most is autonomous weapons. The technology already exists for weapons that can be used against people without human intervention or supervision in deploying them. Militaries around the world have been quick to develop and adopt weapon technology that uses remote computing to fly, drive, patrol, and track. However, the established use of this technology is either for non-weaponized purposes or, in the case of drones, for deployment of weapons by a human controller. Fully automating this technology would in effect give AI-powered machines decision-making ability that could lead to killing humans. Many technologists and academics are warning governments to consider preventing large-scale manufacturing of these weapons via a pre-emptive treaty or other international law:  Ban on killer robots urgently needed, say scientists

As the diverse selection of stories above illustrates, the reach of robots, robotics, androids, and other developments in AI technology is certain to permeate and indeed redefine human life. This will not happen in some distant or unperceived future; real impact from these advancements is already starting to be seen, and there is only more to come. Governments, organizations, and individuals must make diligent risk assessment preparations to integrate this technology with human life in a harmonious and sustainable fashion.


Selected TED/TEDx talks on self-driving cars

In a follow-up to yesterday’s post on current compliance trends in the emerging autonomous vehicle technology industry, below is a collection of videos from TED and TEDx talks about self-driving cars. The possibilities of this technology, still in its infancy, seem almost infinite. The impact autonomous cars could have on modern society and culture is fascinating to contemplate; this technology could disrupt and indeed improve people’s lives in many ways.

First, a primer on the technical basics of the self-driving car systems under development now, and the machine learning and artificial intelligence technology that will be imperative to make them practical and affordable, from Self-Driving Cars of The Near Future (Raquel Urtasun).

Of course, along with the tremendous potential of autonomous vehicle technology come risks and decisions that must be made carefully and thoughtfully, with compliance and ethics considerations in mind. In developing a technology that will have such a wide-reaching impact on so many people, both those who use it and those who do not, it is critically important to keep in mind from the beginning all the interests concerned and how they might conflict or be impacted.

  • Autonomous ride toward a new reality (Limmor Kfiri) – The benefits of self-driving cars must be weighed alongside the issues and ethical dilemmas they prompt. In considering these challenges – which include, for example, the cybersecurity risk that someone could remotely hack a car’s self-driving system and take over control of the steering or brakes from the human inside – creative approaches for handling the problems without stifling the technology are necessary. Governments and individuals who are involved in the design phase can have a huge impact from the beginning in this effort.


  • The Overlooked Secret Behind Driverless Cars (Priscilla Nagashima Boyd) – There are many very practical problems of driving that technologists hope self-driving vehicles can help to address. For example, which route to take for the best commute, or where to find a parking spot, are decisions people must make when driving now that semi-autonomous or autonomous driving systems could handle in the future. However, with these conveniences come some serious potential effects on privacy. People must ask themselves whether they are comfortable with location sharing, for example, something that has already been an uncomfortable subject for some with social media and smartphone apps. This may require a change in attitudes and expectations toward privacy, and a heightened trust in technology, that in this time of cybersecurity breaches and leaks some people are not eager to normalize.


  • What’s the perfect driverless car? It depends on who you ask (Ryan Jenkins) – Design ethics and artificial intelligence meet in the development of the technology for autonomous vehicles. Technologies which can so deeply impact human life – such as smartphones, software algorithms, and indeed self-driving cars – bring with them many moral questions about what the character of and oversight on that impact might be. Any technology which can transform the way people live can do so helpfully or harmfully. Therefore, designers, engineers, lawmakers, and compliance and ethics professionals must collaborate to ensure that autonomous vehicles are produced so that they will meaningfully and positively shape human lives.


  • Are we ready for driverless cars? (Lauren Isaac) – Maybe the technology for driverless cars is great, but what if humans are the ones who are not ready? Like all systems, it can be designed with all the necessary controls and considerations in mind to make it as safe as possible, but if people do not use it appropriately or with good intentions then everything can go wrong. If people are not prepared to share with each other as well as redefine some of their inflexible ideas about ownership and control, then the technology will struggle to succeed in its bolder ambitions for society as a collective. Lawmakers and regulators can intervene early to ensure the philosophical intention of the driverless vehicle includes that people are safe and their interests are served, rather than neglected or abused, by the technology.


  • Are we ready for the self-driving car? (Tyron Louw) – While the previous lecture addresses people’s behavioral capability to handle self-driving car technology, in their attitudes and their openness to change and responsibility, this one focuses on people’s performance capacity. People are often frustrated when their laptops freeze or their phones have a dead battery – how will they react in the moment if a self-driving car has a technical malfunction? How can driverless vehicles be designed to take into account the possibility that the unsafe part of a self-driving car is the human driver in or near it?


The potential of autonomous vehicle technology, as expressed in these lectures and many others, is so striking that it would be an inexcusable loss not to manage its growth and advancement in a way that ensures its sustainability. In the absence of regulatory action, and with tremendous respect for the unchecked ambition of innovation, organizations and individuals working in this space must take a values-based approach to developing, testing, and launching this technology. This way, its risks and challenges can be properly controlled, and its promise can be realized.


Round-up on compliance issues with self-driving cars

The science fiction world of the future is in active development. Projects involving artificial intelligence are on the forefront of the business strategy of many Silicon Valley technology companies and the venture capital firms that finance them, as well as traditional automotive companies and electronics manufacturers. Advancements in automation are the focus of major investments by these organizations, all of which hope to stake a competitive claim in this disruptive market.

Artificial intelligence innovations, and specifically those involving automation, do include robots and computer-generated personas serving functions ranging from assistants to recruiters to reservationists, much as the writers of earlier decades once imagined. However, one of the more practical applications of this emerging technology is in the transportation industry. Self-driving cars offer fascinating efficiency and improvement possibilities for a world that is increasingly urbanized. Organizations working in the self-driving car industry all hope to address the constant dilemmas of the automotive industry: design and production safety, environmental sustainability, distracted driving, and how to handle congestion and commuting.

Of course, as this advanced technology develops, obvious compliance and ethics considerations emerge. Consumer protection, safety and privacy, design ethics, and regulatory response are all challenges which business interests in the self-driving car industry must confront.

  • One of the first questions that comes up in any discussion about autonomous vehicles is one of public relations. How will people – both other drivers and pedestrians – react to seeing a car with no driver behind the wheel? Will this be a distraction in and of itself? Virginia Tech and Ford tested this recently by sending a fake self-driving car onto the streets of Arlington County. The car was intended to look as if it had no driver, as an autonomous vehicle would, but in reality there was a driver “dressed” as a car seat, complete with a face mask, in a specially configured seating area. Such studies should help determine the best design for autonomous vehicles, taking their surroundings into consideration, as well as suggest what indications need to be provided outside the vehicle to let people know what it is:  “Driverless van” is just a VT researcher in a really good driver’s seat costume
  • Ford is far from the only corporate giant interested in self-driving cars. From the consumer electronics sector, Samsung has made a major investment of money and resources, with a dedicated business unit, in developing autonomous technology. Samsung would like to compete with startups already working in this space, such as Mobileye, which is partnered with major automotive companies including BMW and Fiat Chrysler. Samsung acquired Harman, a major audio technology company, last year in preparation for this effort. This work will be done in California, which has been granting self-driving permits via its Department of Motor Vehicles rather aggressively. Removing regulatory and administrative hurdles that might have prevented granting the permits has given California a leg up in attracting businesses which hope to exploit this growing market:  Samsung makes a $300 million push into self-driving cars
  • Like the California DMV, the federal Department of Transportation has been quick to provide guidance on autonomous vehicles so that development and testing for the technology can proceed expediently. These guidelines are recommended but not mandatory and suggest fewer restrictions in the development process, hoping to facilitate innovations and advancements by manufacturers in a technology which is seen as positively disruptive for public safety and access to mobility. The DOT plans to have an evolving approach to addressing automated driving technology as the industry develops, indicating that the government wants the industry to take the lead in setting its agenda:    Department Of Transportation Rolls Out New Guidelines For Self-Driving Cars
  • In general, this deregulatory agenda seems likely to rule the day in the autonomous driving business, at least for now. Federal safety regulators will take a hands-off approach for the time being, deferring to the objections of organizations developing the technology, especially with regard to a proposed requirement under which the National Highway Traffic Safety Administration would have had the ability to approve or reject autonomous vehicle systems before they were offered for sale. A light regulatory touch has been deemed the way forward in order to support what is seen as a transformative technology. Rather than legislate and establish oversight and review standards from the beginning, in this instance lawmakers and regulators have chosen to let the technology lead the way, and presumably will intervene when development and testing lead to actual use and sale of the vehicle systems in consumer and public applications:  Trump’s Regulators Ease the Path for Self-Driving Cars
  • On the same day that the deregulatory posture of the DOT and NHTSA was announced, the National Transportation Safety Board, an independent federal entity that investigates plane, train, and vehicle accidents, announced that a manufacturer was partially to blame for a car accident involving semi-autonomous driving technology. In this case, a motorist died in a highway accident while using Tesla’s Autopilot feature, which handles steering and speed when engaged. In the accident, the Tesla crashed into a truck that entered its lane without the Autopilot system recognizing it. In its own investigation, the NHTSA laid the blame for the accident on human error, saying that the driver should have been monitoring the car despite having the feature engaged. The NTSB, however, said that the Autopilot system had insufficient system controls to prevent the accident. As autonomous vehicles make their debut on the road, and semi-autonomous vehicles become even more widespread, it is very important for consumer safety and protection that this control framework is considered in the design and manufacturing process to protect against insufficient monitoring by drivers or abuse of the system:  Tesla Bears Some Blame for Self-Driving Crash Death, Feds Say

Check back tomorrow for a companion post to this round-up: selected TED/TEDx talks on self-driving cars and what autonomous vehicles may mean for individuals, organizations, and society.


Instagram and the internet’s code of ethics

Instagram is a very popular social media app based on sharing photos and videos, publicly and to selected users as well as via direct, private message. It was launched in 2010 and since April 2012 has been owned by Facebook, another giant in the social media industry. In less than the decade of its existence, Instagram has grown a very large and active community, where users can interact with their friends and “followers” as well as other communities who maintain a presence there, public figures, media sources, and corporate brands.

All of these wildly different groups, from all over the world, sharing content and commentary on one platform, is exciting and promises many opportunities for collaboration. Along with these positive connections, though, of course come negative surprises and possibilities for challenges and abuses. With all the influence Instagram has through its popularity comes also responsibility for defining the standards and limitations of the community as well as what it will put out into the internet and the world.

Instagram has faced its share of criticism for its efforts to implement and maintain effective controls and reporting mechanisms. Instagram relies heavily on user reporting of inappropriate content, such as posts depicting illegal activity or using “coded” hashtags and emojis to conceal and continue such practices. Understandably, even the most aggressive attempts to keep up with the pace of this behavior on social media will fall behind quickly, leading to criticism that the community is unsafe. When Instagram is too proactive or overreaches in deleting comments, posts, or users, however, controversy about intrusions on privacy and expression begins in response.

Kevin Systrom, one of the original creators of Instagram and its current CEO, wants to strike this balance between protection from abuse and freedom of expression. Under his leadership, Instagram is dedicated to ensuring that the content and tone on the platform comply with its community guidelines. Changes to the comments sections on photos – including allowing users to filter out comments that contain certain words, or to post photos without comment sections available – are intended to encourage safer self-expression by posters who might otherwise fear harassment or offensive content in response below their photos.
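To make the idea of a user-configured keyword filter concrete, here is a minimal sketch of how such a feature might work in principle. This is purely illustrative: Instagram's actual implementation is proprietary and far more sophisticated, and the blocked words and sample comments below are invented for the example.

```python
def build_filter(blocked_words):
    """Return a predicate that flags comments containing any blocked word."""
    blocked = {w.lower() for w in blocked_words}

    def is_hidden(comment):
        # Normalize case and strip common punctuation before matching.
        tokens = (t.strip(".,!?") for t in comment.lower().split())
        return any(t in blocked for t in tokens)

    return is_hidden

# A user elects to filter out the (hypothetical) words "spam" and "scam":
is_hidden = build_filter(["spam", "scam"])
comments = ["Love this photo!", "Total scam, click my link"]
visible = [c for c in comments if not is_hidden(c)]  # keeps only the first comment
```

Even this toy version hints at the limits described above: exact-match word lists are easily evaded by misspellings or coded hashtags, which is one reason platforms layer machine learning on top of them.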

Platforms such as Instagram, of course, can never be neutral. Any technology’s relationship with its users is fraught with moral concerns, starting with the ethics of its design, and made only more complex by algorithms, bot accounts, and the real users who make their own decisions about the content they share and promote, content that runs the gamut from universally appropriate to offensive, harassing, or even illegal. In such a context, applying a code of ethics is a very hard task, but perhaps the inherent difficulty of doing so is what makes it so important to try.

Creating filters and tools to hide and promote, prevent and engage – whether deployed by community management behind the scenes or elected by users – is just the beginning of the design choices engineers at Instagram have made to implement technical responses to problematic tone in some corners of the platform. Instagram also tries to deploy artificial intelligence to help: to sort real posts from fake ones, and to learn from data in order to understand why seemingly innocent comments or content may be abusive in context, using a technique called word embeddings. AI has its limitations, of course, but in any rules-based approach to governance it is necessary to start with something good and then make continual efforts to make it better, rather than leave risks unaddressed in hopeful pursuit of the best.
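The word-embedding idea can be sketched in miniature: words are mapped to vectors so that semantically similar words sit close together, letting a system flag a comment as near known-abusive language even when the exact word was never blacklisted. The tiny hand-made three-dimensional vectors and example words below are purely illustrative; real systems learn embeddings with hundreds of dimensions from large text corpora.

```python
import math

# Toy "embeddings" for illustration only; not real learned vectors.
EMBEDDINGS = {
    "idiot":  (0.90, 0.10, 0.00),
    "moron":  (0.85, 0.15, 0.05),
    "sunset": (0.00, 0.20, 0.90),
    "beach":  (0.05, 0.25, 0.85),
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar directions, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def max_abuse_score(comment, abusive_seeds=("idiot",)):
    """Highest similarity between any known comment word and any abusive seed."""
    scores = [
        cosine(EMBEDDINGS[w], EMBEDDINGS[s])
        for w in comment.lower().split() if w in EMBEDDINGS
        for s in abusive_seeds
    ]
    return max(scores, default=0.0)

# "moron" was never explicitly blacklisted, but its vector sits near "idiot",
# so it scores high; an innocuous comment about scenery scores low.
print(max_abuse_score("what a moron"))           # high (close to 1.0)
print(max_abuse_score("beautiful beach sunset")) # low (close to 0.0)
```

The design point is that similarity in vector space, not an exact word list, drives the flagging, which is what lets such systems generalize to wording they have never seen.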

Time will tell how effective Instagram’s efforts to make the platform a safer place for expression really are, and what they will accomplish: a place open for creative sharing and communication, but not for toxicity and abuse, or a censored, sanitized, disingenuous photo collection where self-expression is restricted and speech suppressed? Perhaps Instagram will succeed in going against the tide of the internet and much of life, where the level of social discourse seems to have sunk low, tinged by anger and people’s worst impulses, and make a place where the conversation can be a bit more civil, even if it has to be filtered first to get there.

For more detail on Kevin Systrom’s ambition of making Instagram a safe haven and role model platform on the internet, and the challenges that both motivate and complicate this mission, see Nicholas Thompson’s story on Wired.


Round-up on ethics of design in technology

One of the most interesting and challenging inquiries in the evolving ethical code of technology has to do with design choices. Ethical decision-making and process design has direct impact on the fluid, complex process of creating the devices, interfaces, and systems that are brought to market and used by consumers on a constant basis. In such a disruptive and innovative industry, there are moral costs for every design decision: every new creation replaces or changes an existing one, and for everyone who has new access or benefits, others experience the costs of these decisions. Therefore the ethics of design as applied to technology and, of particular interest, social media, have concrete importance for everyone living in a world increasingly dominated by user experiences, communities’ terms of service, and smart devices.

  • Former Google product manager Tristan Harris has gone viral with his commentary on the ethics of design in smartphones and the platforms creating apps for them. There is a balance in online design where internet platforms go from being useful or intuitive to encouraging interruption and even obsession. Many people worry about the effect “screen time” may have on their attention span, quality of sleep, and offline interactions with people. Design techniques may actually keep people attached to their devices in a constant loop of advertisements, notifications, and links, as content providers and platforms compete to grab viewers’ attention. Alerting people to the control their devices have over their attention and time is one step; urging more ethical choices in the design process is the next frontier for innovation reform:  Our Minds Have Been Hijacked By Our Phones.  Tristan Harris Wants To Rescue Them.
  • The above phenomenon of addictive design has become so embedded in the creation of app features that even the most subtle changes can have a huge impact on users’ consumption practices. But when do features go from entertaining and user-friendly to compulsive, even addictive? Refreshing an app can be like pulling the lever on a slot machine, giving the brain rewards in the form of new content to keep the loop going at the expense of other activities and priorities. These design “improvements,” then, may actually function more as manipulations:  Designers are using “dark UX” to turn you into a sleep-deprived internet addict
  • These small, ongoing redesigns are intended to make apps more readable and consumable, making content more captivating and enabling longer browsing – again prompting the question: what is the ethical code for the control designers wield over users with these choices? From a design ethics perspective, these small changes can be viewed as more alarming than major ones, because they are so incremental that many users do not consciously notice them; “optimization” tips into “over-optimization,” and meaningful interaction becomes possibly destructive:  Facebook and Instagram get redesigns for readability
  • Artificial intelligence always captures the public’s imagination – thrills and fears about the possible developing capabilities of robots and predictive algorithms that could direct and define – and perhaps threaten – human existence in the future. AI has been developing in recent years at a breakneck pace, and all indications are that this innovation will continue or multiply in the coming period. The science fiction-esque impact of AI on society will grow and bring with it all kinds of ethical concerns about the abilities of humans to define and control it in a timely and effective way:  Ethics — the next frontier for artificial intelligence
  • Social media platforms have developed into social systems, with all the dilemmas and dynamics that come along with that. These networks may face the choice between engagement and all of the thorny dialogs that come with it, and a simpler, more remote model that can be enjoyable but is less interactive and therefore, perhaps, less provocative:  ‘Link in Bio’ Keeps Instagram Nice

Queries into design ethics and choice theory in technology, especially social media, ask the questions of what human experience will evolve into in a world which is increasingly digitized and networked. The design decisions made in the creation of these devices and systems require an ethical code and a sense of social responsibility in order to define the boundaries of what are the best collective choices.
