When social platforms rose to prominence in the late 2000s, their main purpose was to connect people based on common interests. Sharing life’s moments and interests was supposed to result in positive exchanges that in turn enhanced relationships. The allure of easy, free platforms in exchange for advertising and the use of our data seemed like a simple bargain. To protect themselves from liability and accusations of censorship, social media companies maintained that they were not responsible for the content posted on their platforms.
Netflix’s new documentary “The Social Dilemma” explores the current dysfunction of social media platforms and the growing dystopian world they project. Instigators and luminaries such as Tristan Harris, Jaron Lanier, and Cathy O’Neil describe their roles in creating what they see as some of the many current problems with social media.
Social media is built on engagement: motivating users to contribute by continually sharing their thoughts. Coupled with sharing is the expansion of one’s social network, which is key to the power of these platforms. For example, influencers with high social status leverage their reputations to connect ideas, products, and services and to shape conversations. In many cases these conversations are monetized manipulations.
Social media participants are increasingly premeditated in sharing content for distorted purposes. Many now intentionally pretend to be someone they are not in order to raise their online social status or to mislead others. An example is “catfishing,” a practice in which someone presents a fictional online persona — or hijacks an existing profile — to manipulate and betray victims with the intention of doing them emotional or financial harm. The MTV show “Catfish” exposed this dysfunction.
Manipulation expands further when individuals purposely create conspiracy theories that amplify people’s fears and vulnerabilities. QAnon is an extreme example. The movement’s premise is that “a cabal of Satan-worshiping pedophiles” is running a global child sex-trafficking ring and plotting against President Donald Trump, and that a coming “day of reckoning” will involve the mass arrest of journalists and politicians. In less incendiary times, QAnon would have been considered preposterous by most people. In our current climate of political crisis, however, its rhetoric has grown into a distorted movement that may have a significant impact on who will lead the most powerful nation on Earth.
What was not entirely clear in the documentary, but is highlighted on the film’s website, is the actual dilemma: “Never before have a handful of tech designers had such control over the way billions of us think, act, and live our lives.” The documentary traces how social media went from sharing, to manipulating, to relying on artificial intelligence to create ever more sophisticated personalized experiences, and how it ended up driving social dysfunction instead.
The result is that the more we use and rely on social media through the hyper-connectivity of the “attention economy,” the more power we give it to drive us apart.
The rise of machine learning and customized content
Algorithms and automation are making social media platforms smarter and more powerful. They are also designed to exploit weaknesses in human psychology in service of aggressive, advertising-driven business goals. Unfortunately, these algorithms are “black boxes,” becoming too complex for humans to fully understand as they learn and modify their behavior. Cathy O’Neil rightly states that “algorithms are opinions embedded in code.” Hardwired in by human developers, these opinions are capable of digitizing and normalizing existing racial, social, and gender biases.
Algorithm-driven realities have given people less control over who they are and what they believe. Social platforms now serve up dysfunctional realities that continually reinforce or modify our behaviors. Increasingly unaccountable systems feed each user a personalized content stream that becomes an echo chamber built from individual online habits. Manipulations are amplified by algorithms that cannot differentiate honesty from manipulation, since machine learning cannot yet decipher intent. These formulas do, however, detect subtle behavioral changes that reveal new information about users.
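The echo-chamber dynamic described here can be illustrated with a toy engagement-maximizing recommender. This is a minimal sketch, not any platform’s actual algorithm: the topic names, click probabilities, and reinforcement rule are all invented, standing in for proprietary systems that are vastly more complex. A small initial preference, reinforced on every click, narrows the feed over time:

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

TOPICS = ["politics", "sports", "science", "celebrity"]

def recommend(weights):
    """Pick a topic in proportion to learned interest weights,
    i.e. serve whatever is most likely to be engaged with."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]

def simulate(rounds=1000):
    # Start with no knowledge of the user: every topic equally weighted.
    weights = {t: 1.0 for t in TOPICS}
    # Hypothetical user who is only slightly more likely to click on politics.
    click_prob = {"politics": 0.6, "sports": 0.5,
                  "science": 0.5, "celebrity": 0.5}
    for _ in range(rounds):
        topic = recommend(weights)
        if random.random() < click_prob[topic]:
            # Each click reinforces the topic, skewing future recommendations.
            weights[topic] += 1.0
    return weights

final = simulate()
share = final["politics"] / sum(final.values())
print(f"share of feed devoted to 'politics': {share:.0%}")
```

The point of the sketch is the feedback loop: the system never asks what the user values, only what the user clicks, so a marginal behavioral tendency compounds into a skewed reality.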
One of the largest impacts is on “societal truths”: understandings that used to rest on shared experiences have been replaced by individualized realities. Roger McNamee, an investor in technology companies, stated that these platforms have turned each of us into the star of our own “Truman Show”: individual users have become the product, and companies are productizing our behaviors.
Thomas Friedman of the New York Times summarized the current dangers of our social media landscape:
“I worry because Facebook and Twitter have become giant engines for destroying the two pillars of our democracy — truth and trust. Yes, these social networks have given voice to the voiceless. That is a good thing and it can really enhance transparency. But they have also become huge, unedited cesspools of conspiracy theories that are circulated and believed by a shocking — and growing — number of people. . . These social networks are destroying our nation’s cognitive immunity — its ability to sort truth from falsehood.”
Designers have contributed to the social dilemma, but are not the cause of it
In the film, former leaders of Facebook, Twitter, Google, and Instagram discuss their desire to create cool applications but admit that they were blindsided by the resulting negative effects of their deployed ideas on human populations.
For example, the documentary features former Google and Facebook engineer Justin Rosenstein, who is credited with developing Facebook’s iconic “Like” button. His team’s initial intention was to let people quickly invest in a post by “liking” it. Once the icon was deployed, however, users immediately began to abuse it, and he was shocked by the unintended consequences, including emotional distress among users. After seeing these effects, many former Silicon Valley executives now keep their own children away from digital devices and platforms at an early age.
Companies want to create “sticky” applications that incentivize users to spend more time in them. Designers use human-centered design principles and methods to engage directly with and learn about users, then apply those insights to build user-centered digital platforms that improve users’ physical reality. Persuasive design techniques supporting ease of use, personalization, and notifications are viewed as a net positive that lets people do more with less effort. When data is addressed, it is in the context of running the system, protecting it from security risks, and meeting compliance requirements. When AI is introduced, it is to help reduce cognitive load.
Human-centered design got its start in Silicon Valley, but it has become harder to put its principles into practice as advances in automation and processing power outstrip our cognition and free will. Users tend to skim web pages and make assumptions, behaviors that “dark patterns” exploit: companies make a page look as if it is saying one thing when it is in fact saying another. Corporations now control the noise-to-signal ratios of their platforms and can adjust them with the turn of a knob based on business goals and personal consumption habits. There are now legitimate questions about whether human-centered design can work when business models dominate the thinking of consultants, engineers, and designers.
Most designers and engineers do not have a formal grounding in cognition and psychology, which could inform decisions when creating digital platforms. Systems design, informed by systems theory, has been rapidly adopted; it rationally interconnects physicality, modules, interactions, and data so that a product or service satisfies its specified requirements as a coherent entity. However, systems design does not address people’s emotional needs.
There is a new area called “computational design,” which leverages the capabilities of computation to expedite interactions, reduce friction, and deliver tailored human-centered experiences. It balances physical and digital interactions to support human interventions through algorithms and analytics that inform digital systems. Computational design does not seem to address the emotional needs of users either.
Most designers are not aware of the enormity of the larger macroeconomic issues because they are relegated to expression and production roles. They do not have the authority to address screen time, bad algorithms, or the emotional impact of technology on users. So the challenges The Social Dilemma highlights, addictive online behavior and algorithms that amplify content distortions, are not necessarily the result of making social platforms easier to use and consume. Management, engineers, and product managers have far more sway over the rationale and architecture of social platforms than designers do.
Can we control the social dilemma?
Based on empirical evidence, the answer is no. Design and engineering do not have the knowledge or capability to integrate the values and concepts of humane technology into their thinking, or to affect product development that is under financial and market pressure to scale digital platforms. Jaron Lanier was probably the most eloquent presenter and had the best suggestions for dealing with the current dysfunction of our digital world. He does not want to destroy digital platforms: “I just want them to reform so they don’t destroy the world.” Unfortunately, Facebook has broken most of its promises to self-reform.
A large problem not brought up in the film is that the United States (unlike the European Union) has no real clarity on how privacy is defined in the digital era and no pro-individual legislation on data rights and usage. The Biometric Information Privacy Act (BIPA), the California Consumer Privacy Act (CCPA), and the California Privacy Rights Act (CPRA) are starting to have an impact and could converge into a national consensus on personal privacy. Without clear protections governing how data is collected, aggregated, and used, social media will continue its current spiral.
The genie cannot be put back in the bottle, but can we adapt? The answer is yes. The adaptation model focuses on changing individual behaviors: reducing screen time, detoxing by disabling data sharing (at the cost of a less convenient internet), combating misinformation, and joining the humane technology movement. This puts the onus on individual action to protect oneself from what The Social Dilemma describes.
Joe Toscano, a former design consultant at Google and author of Automating Humanity, had an interesting proposition. Currently, companies instrument their platforms with no regard for accountability and collect massive tranches of data, which they use and resell to others. His premise is that this unfettered use of data has led to a lack of accountability. He has proposed a fee for collecting data, which would force companies to reconsider whether they really need to vacuum up massive amounts of data, since the recurring daily, weekly, monthly, or yearly fees would erode profit margins.
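Toscano’s fee idea can be made concrete with a back-of-the-envelope sketch. The per-record rate and the record counts below are invented purely for illustration; the proposal itself names no figures:

```python
def annual_data_fee(records, fee_per_record_usd=0.01):
    """Hypothetical recurring yearly levy on stored personal-data records.
    The $0.01-per-record rate is an assumption made up for this sketch."""
    return records * fee_per_record_usd

# Hoarding every available signal vs. retaining only what the product needs.
hoard = annual_data_fee(5_000_000_000)   # 5 billion records kept on file
lean = annual_data_fee(50_000_000)       # 1% of that, after data minimization
print(f"hoard: ${hoard:,.0f}/yr  lean: ${lean:,.0f}/yr")
```

Under such a levy, data stops being a free asset and becomes a carrying cost, which is exactly the incentive shift Toscano describes: minimization becomes cheaper than hoarding.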
Creating ethical digital platforms
The Social Dilemma is an excellent documentary that shows how a well-intentioned road to dystopia produced the troubling results we are now living through. The film’s only shortcoming was failing to fully flesh out the dilemma it put forth: “Never before have a handful of tech designers had such control over the way billions of us think, act, and live our lives.” It did, however, highlight many shortcomings of both design and engineering and their willing, unwitting, or unwilling role in creating powerful digital corporations.
Tristan Harris, founder of The Center for Humane Technology, identified the following principles that could guide the creation, deployment and monitoring of conscious digital platforms that “…honor human nature, grows responsibly, and helps us live lives aligned with our deepest values”:
• Create a market that values humane technology
• Obsess over values, not metrics
• Emphasize physical interactions
• Internalize the pain your users experience
• Enable wise choices vs. more choice
• Nurture mindfulness
• Responsibly balance growth vs. maximizing profit
The business, design, and engineering communities could take these key principles and connect them to business, technology, and human-centered design efforts. These guidelines also imply that business, design, and engineering educational programs must integrate more social science coursework to ensure the creation of ethical digital platforms. The Center for Humane Technology has also created a humane design guide.
After watching this documentary, I feel we cannot ignore the impact of social platforms on human-to-human, human-to-computer, and computer-to-computer interactions. The corporations that run them have failed to monitor and address glaring issues in how many of them distort truth and exploit bias and fear. Withdrawing their liability shields and exposing them, like most publishers, to legal responsibility would force them to restructure their platforms so they do not destroy our sense of individual agency and collective belonging.
Until then, it falls to each individual to protect themselves from the exploitative aspects of social media.