Since the announcement that Facebook will be going public, there has been a spike in discussions about online privacy and the rights of individuals to protect their personal data, or to be protected from how other entities interpret a person’s data trail. This came into focus when an Austrian student requested his Facebook data:
“Max Schrems, a 24-year-old law student from Salzburg, Austria, wanted to know what Facebook knew: He requested his own Facebook file. What he got turned out to be a virtual bildungsroman, 1,222 pages long. It contained wall posts he had deleted, old messages that revealed a friend’s troubled state of mind, even information that he didn’t enter himself about his physical whereabouts.” Source
Upon reviewing his data trail on Facebook, he said: “It’s like a camera hanging over your bed while you’re having sex. It just doesn’t feel good.” Source
Now that the internet is quickly maturing into a global web of personal data trails, the implications of personal preferences and tracked online behaviors are coming into focus as an intractable, wicked societal problem. Discussions about online privacy span many dimensions, shaped by ever-changing personal concerns, technologies and data models.
The United States Supreme Court is still operating under a late-1800s definition of privacy: the ability to be left alone. Being left alone is antithetical to social sharing technologies that are continually fed by individual thoughts and actions through a digitally connected web. Yet the concept of sharing information, and the concerns over privacy, is nothing new:
“Anxieties over privacy came up when postcards were first sent in the late 19th century. The advent of photography prompted Samuel Warren and Louis Brandeis, in an 1890 article in The Harvard Law Review, to warn of the dangers of displaying private family wedding pictures in the pages of every newspaper.” Source
You are interacting with a network
We live in a world where people access, document and share information through the internet. Because the internet was originally designed as an open system for sharing, that initial decision has brought forth a wicked series of problems as the network has grown in size and sophistication. Everything you touch on the internet leaves an electronic marker, and these markers become trails that individuals and companies can follow to find you, your content and, most importantly, the intent behind those trails, in order to predict behavior and provide relevant offers.
“Personal data is the oil that greases the Internet. Each one of us sits on our own vast reserves. The data that we share every day — names, addresses, pictures, even our precise locations as measured by the geo-location sensor embedded in Internet-enabled smartphones — helps companies target advertising based not only on demographics but also on the personal opinions and desires we post online.” Source
The more dependent society is on digital technologies that are fed by databases, the more perceptions and decisions are made upon these electronic trails. Every time you use your credit card or a supermarket discount card, you are feeding databases that use your patterns to place you in a demographic, psychographic or other actor group, in order to make decisions on what future goods and services you might consider.
A data point may be interesting, but in and of itself may not tell much. Depending on the application, combining additional data points with your data produces a data cube: a set of measures that can be sliced and analyzed along multiple dimensions. Amazon, Google and Facebook apply their own proprietary algorithms to past behavior in order to predict the future preferences of an individual or a group. In terms of search engines and social media, these data aggregations are used to target products and services to people based on what the aggregations mean to the aggregator.
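The proprietary algorithms themselves are not public, but the basic mechanics of this kind of aggregation-driven prediction can be sketched in a few lines of Python. The events, users and categories below are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical click events: (user, product_category)
events = [
    ("alice", "books"), ("alice", "books"), ("alice", "garden"),
    ("bob", "books"), ("bob", "electronics"), ("bob", "electronics"),
]

# Aggregate each user's history into per-category counts --
# one "slice" of a data cube keyed by user and category
profiles = defaultdict(Counter)
for user, category in events:
    profiles[user][category] += 1

def predicted_interest(user):
    # Naive prediction: the category a user touched most often
    # is what gets targeted next
    return profiles[user].most_common(1)[0][0]

print(predicted_interest("alice"))  # books
print(predicted_interest("bob"))    # electronics
```

Real systems add many more dimensions (time, location, the behavior of similar users), but the principle is the same: individual clicks are rolled up into a profile, and the profile drives what is served back.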
So in a sense, all we are is pattern. These patterns take many forms, usually in the way we use our day from sunrise to sunset. We either create or conform to societal patterns and they become powerful mental models of how we view the world. These patterns are based upon individual actions and behaviors that link with other actions to create larger narratives. Aggregated data works in the same way.
Because the internet in the United States was expanded by many thousands of companies, each with their own services, data models and intentions, the internet we know is an amalgam. However, with the maturation of search through Google and social technologies through Facebook, these two companies are consolidating standards and have codified the advertiser model, which depends on the aggregation of individual and group data to tailor products and services. Free never felt so tentative and unreliable. Then again, there is no such thing as free; it is still a contractual agreement.
Americans have been acculturated into expecting the internet to be free. In order to use free services, the transactional value is your data, which is shared with, or in many cases owned by, software-as-a-service (SaaS) companies. This personal data allows the internet to conform to your online behavior or explicit personal preferences, providing a more relevant online experience and filtering out many things that are not of interest. To make this possible, an individual either needs to share information or allow cookies to be stored in their browser so that every single click can be tracked. Each click or data-entry field adds layers to your electronic trail. This allows for frictionless transactions, where a user does not need to keep logging in or wade through irrelevant information. Just turn off your browser’s ability to accept cookies to understand the implications of not having them.
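In outline, the cookie-based tracking described above is simple: on first contact a tracker sets a unique visitor ID in the browser, and every later request returns that ID along with the page visited. A toy simulation, with invented page names and no real network code, might look like this:

```python
import uuid

cookie_jar = {}   # what the browser stores for the tracking domain
click_log = []    # what the tracker's server accumulates

def serve_page(browser_id, page):
    # On first contact, the tracker sets a cookie with a unique visitor ID
    if browser_id not in cookie_jar:
        cookie_jar[browser_id] = str(uuid.uuid4())
    # Every later request sends the cookie back,
    # adding another layer to the electronic trail
    click_log.append((cookie_jar[browser_id], page))

serve_page("my-laptop", "news/article-1")
serve_page("my-laptop", "shop/shoes")
serve_page("my-laptop", "shop/checkout")

# All three clicks are now linked to a single visitor ID
visitor_id = cookie_jar["my-laptop"]
print([page for vid, page in click_log if vid == visitor_id])
```

Blocking cookies breaks the link between requests, which is exactly why so many sites stop working smoothly without them.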
Privacy is in the eyes of the beholder
Digital natives who use these platforms have never known a world where they did not have continual access and a presumption of sharing information with others. This has become the new social currency, communicating one’s status, associations and value. For digital immigrants, who came from a world of discrete technologies and an older broadcast model, the new digital world feels nihilistic and predatory. Their values are not aligned with a contemporary social order that assumes everyone has to opt out, not opt in.
“If I’ve Googled “diabetes” for a friend or “date rape drugs” for a mystery I’m writing, data aggregators assume those searches reflect my own health and proclivities. Because no laws regulate what types of data these aggregators can collect, they make their own rules.” Source
The problem with sharing information is that most users do not read or understand the terms and conditions of service, or how their information is used by companies. Once personal stories of the negative impacts of this data are shared, the very same people cry foul when the meaning of their data is used against them. While each of us has legitimate privacy concerns, how can we manage the security options our digital service providers give us if we do not understand what those options mean? Most people never change the default security settings, or even the default parameters, of any program they use, out of ignorance or lack of understanding. Since there is no shared language or shared set of concepts for the use of data, the subject of privacy and the rules that govern it is essentially a black box.
For some, these data-generating experiences border on stereotyping in the worst possible way, because the intent of the user is not taken into account, which makes what is served back to the user irrelevant.
“Stereotyping is alive and well in data aggregation. Your application for credit could be declined not on the basis of your own finances or credit history, but on the basis of aggregate data — what other people whose likes and dislikes are similar to yours have done.” Source
This stereotyping also transcends personal preferences: based on who you are, you will be aggregated with wider groups of individuals who share the same data points.
“Data aggregation has social implications as well. When young people in poor neighborhoods are bombarded with advertisements for trade schools, will they be more likely than others their age to forgo college? And when women are shown articles about celebrities rather than stock market trends, will they be less likely to develop financial savvy? Advertisers are drawing new redlines, limiting people to the roles society expects them to play.”
While there are cases when my data trail throws curves at advertisers and I receive the wrong product or service, for the most part what is offered is close enough to be interesting or even acted upon. There have been discussions about how much of an individual’s data should be available to companies, and about their ability to sell that information to others for future use. Europe has instituted data standards that define the rights of the individual and the limits on companies’ ability to use or sell individual data. This puts Europe and the United States on a collision course, because the American laissez-faire data environment runs on almost unfettered corporate use of individual data.
An acceleration of the problem
This issue is accelerating due to the proliferation of mobile platforms such as smartphones and tablets, which use cellular or broadband networks while users are on the move. They can access, in many cases, the same digital platforms that are accessed on home and office computers. The increased use of geolocation services, and the more frequent data access these mobile platforms allow, are bringing new opportunities for contextual advertising, using location, time of day and other behaviors to serve up more relevant options. With many applications now linked to larger social networks like Facebook, information from one application is harvested and ported to another, many times without the understanding of the user, or their consent.
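At its simplest, contextual advertising of the kind described here is a rule table keyed on signals like location and time of day. The rules and ad copy below are invented to show the shape of the idea, not any real ad platform’s logic:

```python
def pick_ad(location, hour):
    # Toy rule table: location plus time of day drives the offer served.
    # Real systems layer in behavioral history, demographics and more.
    if location == "downtown" and 11 <= hour <= 14:
        return "lunch special nearby"
    if location == "airport":
        return "travel insurance"
    return "generic banner"

print(pick_ad("downtown", 12))  # lunch special nearby
print(pick_ad("airport", 9))    # travel insurance
```

The point is that each added signal (where you are, when it is, what you just did) narrows the targeting, which is why mobile devices with geolocation are so valuable to advertisers.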
Based on personal experience, as well as work with companies that thrive on user data, I have come to accept that I do not have any privacy, and that all of the data cubes generated by my sign-ups, clicks, uploads and posts will be available for many years into the future. This position does not mean that I don’t care about the creation of some baseline standard for privacy, data ownership and the terms of data access by third parties. However, until there is such a baseline, all I can do is be vigilant and manage my digital ecosystem by staying actively involved with preferences and settings. Unfortunately, for most users, this is beyond their interest or comfort zone.
Are you ready?