As social media platforms, the internet of things, and other online networks advance in sophistication and prevalence, the line between engagement and addiction becomes ever thinner. Features designed to make browsing the internet or using connected devices more comfortable, intuitive, and pleasurable are also vulnerable to misuse and abuse, which can have a highly negative impact on people’s daily routines and lives.
Indeed, the stereotype of people too engrossed in their phones or tablets to notice those around them is widespread and real. Much of social interaction has migrated into online communities, taking place on social media or in internet comment sections and forums. The positive possibilities of this kind of access to information and collaboration are boundless. Connecting across continents and sharing all kinds of information and ideas is powerful for learning, cooperation, and creativity. Making these systems easier and more efficient to engage with only further empowers these uses. Designers, engineers, and technologists have taken positive responses from users and applied that feedback to new features and improvements, all with the aim of making the user interface and experience better.
Whether it’s tuning screens to display vivid images that are easy on the eyes or implementing machine-learning algorithms that fill users’ feeds with the most interesting and entertaining content tailored to them, the original aim of these innovations is to make the platform or device more compelling to use and thereby encourage the user to spend more time on it. This has obvious commercial appeal to the companies that create these networks and devices, their advertisers, and their other partners, all of whom are competing to attract people’s attention and gain valuable impressions or content views. Time is money, and a faithful user is a lucrative one.
However, the eyeballs that content providers and marketers wish to attract are, of course, inside the heads of people, and so the ever-escalating effort to engage those people runs into risky territory where interest or active participation edges into dependency and addiction. Countless studies have shown health problems stemming from overuse of phones, tablets, computers, and other devices, including eye fatigue, migraines, sleep deprivation, and other problems related to vision, concentration, or stress caused by overindulgence in screens. This is not to mention the destructive social impact that over-immersion in devices can have: isolating people from their families and communities, interrupting work, diminishing traditional communication skills, and exposing people to online abuse and other unsafe or inappropriate content that could cause harm.
In fact, some of the loudest voices against the dark side of advancing personal technology belong to the very designers and engineers who had a hand in creating its most addictive features. For example, the engineer involved in creating the Facebook “Like” button and the designer who worked on the “pull to refresh” mechanism first used by Twitter are among a growing group of technologists who have begun to question and reject the role that immersive technologies play in their lives. These individuals understand the good intentions behind the original creation of these features, meant to make products more useful or fun for users, but they also see the downsides. Dubbed “refuseniks,” they have purposefully worked to diminish or balance the presence of technology in their own lives. Because so much addictive behavior centers on smartphones and their applications, many of the people who designed these features and now speak out against them turn off notifications, uninstall particularly time-wasting applications, and even distance themselves physically from their phones, following strict personal rules about usage or cutting off access after certain times or in specific places.
The question remains: pioneers of these features may have matured enough within their own careers and lives to realize that their earlier work has destructive potential they don’t want to indulge personally. But how will the companies creating products and services in this space strike that balance as public attention increasingly acknowledges the problematic nature of these features? Being a refusenik cannot be the answer for everyone, as these devices and platforms do bring great value to their users and the world as a whole, despite the negative effects they can also frequently have. Organizations working in this space can draw on corporate social responsibility values to balance their innovation of new features against the ways consumers may actually use them, for good or ill.
On an individual level, it helps to take personal responsibility for recognizing and understanding how these platforms and technologies are designed to keep people engaged, and how that engagement can turn into addiction. Being conscious of these features and of one’s own tendencies in using them is key. People should push themselves to understand why and how they use these technologies before adopting them. Those who feel prone to overuse can mitigate its effects by understanding what drives it and by managing their exposure.
For an interesting perspective on high-tech designers and technologists who have rejected the technologies they sometimes played pivotal roles in creating, check out this article from The Guardian.