Progress is usually treated as something positive: the very word is pleasant to say and implies that a worthwhile goal is being reached. We believe that progress is what increases our quality of life and brings social, political and economic benefits in its wake. Since the Enlightenment, there have been many forms of progress that have improved the human condition.
Since the development of the internet, and with our increasing dependence on digital platforms to improve the physical world, we have had many jolts that give us pause about what progress is and whether it is all upside and no downside. Tim Cook, CEO of Apple, gave a commencement speech at Stanford, in the heart of Silicon Valley, on the problems of digital progress. He stated, “Don’t bring a knife to a chaos factory . . . If you’ve built a chaos factory, you can’t dodge responsibility for the chaos. Taking responsibility means having the courage to think things through.” He was referring to the recent incidents at Facebook, but also to the many innovations that have as many problems as benefits.
For example, Amazon Web Services has introduced a facial recognition program called “Rekognition,” which can match photos and videos against databases and is being used by businesses and law enforcement agencies. This technology could make society safer by identifying individuals who are being sought by officials for breaking the law. But the same technology could also be used by authorities as a way to control society. There have been many legitimate questions about what controls Amazon has built into the platform to prevent this.
Amazon has stated that it has ethical guidelines on how Rekognition should be deployed and used. However, technology companies have promised many times that privacy is built into their technologies. To blunt these objections, Andy Jassy of AWS stated that “. . . abuses of the technology weren’t necessarily Amazon’s responsibility, because any tech tool can be turned into a weapon . . . You could use a knife in a surreptitious way . . . I strongly believe that just because tech could be misused doesn’t mean we should ban it and condemn it.” Amazon’s position is that any technology can be used – and abused. However, this is exactly the type of technology that Tim Cook warns can cause chaos, and Amazon is not taking responsibility for the possible abuses of Rekognition.
Kara Swisher in a recent New York Times article noted that “Tech’s ploy to assuage its growing chorus of detractors has been to beg for forgiveness over and over for making a mess, promise to follow laws once someone else figures out a way to manage the madness, and then hire an army of lobbyists to make sure those rules are not too onerous.”
Any new concept, framework or tool that has no previous history is an unknown quantity when it comes to broader social, political and economic consequences. Like a pebble thrown into water, the initial splash – the intended effect – creates waves that emanate outward through the rest of the system, and those waves cannot be directly controlled.
The law of unintended consequences underpins many of science fiction’s key tropes. The premise is that tools and technologies often have dystopian consequences beyond their intended use, which lead to many more problems. Technology companies focus on benefits rather than alternative scenarios of possible misuse, because such scenarios are treated as extreme edge cases or externalities. This is why Tim Cook states that these same companies need to take more responsibility and address possible misuses.
A key aspect of newer digital technologies like Rekognition, and of the proliferation of services that could integrate it, is that they take advantage of abstracted infrastructure, which allows a creator to stitch together several different services they do not directly control in order to deliver value to markets. Each service has its own definitions of privacy and its own service level agreements, which could conflict with those of other services and create risks unknown to the creator of the synthetic service.
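One way to see the risk this describes is to model each dependency’s terms and ask what the stitched-together service actually guarantees. The sketch below is purely illustrative – the service names, fields, and numbers are hypothetical, not any real provider’s terms – but it captures the point that a composite service can only be as private as its weakest dependency:

```python
from dataclasses import dataclass

# Hypothetical model of a third-party service's privacy terms.
# All names and values here are invented for illustration.
@dataclass(frozen=True)
class ServiceTerms:
    name: str
    retains_data: bool              # does the service keep copies of submitted data?
    shares_with_third_parties: bool # may it pass data on to others?
    retention_days: int             # maximum time submitted data may be stored

def composite_terms(services):
    """The effective guarantee of a synthetic service is the weakest
    guarantee among its parts: if any dependency retains or shares
    data, the composite effectively does too, and the longest
    retention period governs."""
    return ServiceTerms(
        name="+".join(s.name for s in services),
        retains_data=any(s.retains_data for s in services),
        shares_with_third_parties=any(s.shares_with_third_parties for s in services),
        retention_days=max(s.retention_days for s in services),
    )

# Two imaginary dependencies with conflicting terms.
face_match = ServiceTerms("face-match", retains_data=True,
                          shares_with_third_parties=False, retention_days=90)
photo_store = ServiceTerms("photo-store", retains_data=False,
                           shares_with_third_parties=True, retention_days=365)

effective = composite_terms([face_match, photo_store])
print(effective)
```

Even though neither imaginary dependency both retains and shares data, the composite does both, and for a full year – a risk the creator of the synthetic service may never have examined.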
A case in point: a California law enforcement officer input an underage sex-trafficking victim’s photo into Spotlight, an application from Thorn, a non-profit focused on exploited children that partners with Facebook, Amazon, and Dropbox. Spotlight connects to Rekognition through an API, and it returned a list of online sex ads featuring the victim’s photo. This was a good use of Rekognition. On the opposite side, the Chinese government is using facial recognition to track all of its citizens and create digital dossiers in order to control its population. Same technology, different use cases.
A key problem is that the United States and many other countries have no shared definition of privacy, so each company defines what privacy means for itself and its markets. A bill in the United States Congress, the “Commercial Facial Recognition Privacy Act of 2019,” attempts to put limits on the commercial use of facial recognition technology and to protect consumer privacy. Europe is an exception: it has defined and implemented the General Data Protection Regulation (GDPR), which gives EU citizens specific rights and controls over their personal data while also simplifying the regulatory environment for the collection and use of data by businesses, so that both citizens and businesses can mutually benefit from digital technologies.
As societies rely more and more on digital platforms, the free market model with its conflicting service level and terms of service agreements is not working. Governments will need to take an active role in regulating these platforms to curb possible abuses and risks, and to work with the private sector in continually reviewing regulations to keep pace with new types of abuse as they actually occur. If large technology companies do not work with governments on technologies that have major societal implications, then the chaos Tim Cook spoke of will destabilize and erode the relationships between people, groups and governments.
Hi. Just read your article on the unintended consequences. Thanks. Good stuff. Two ideas came blazing to my thoughts. The first being 4th amendment rights and ethics. As in, you’re doing it wrong big tech. The second being products based. Get the 1871 crew to build an app that locks your digital trail, you allow limited access levels, you get paid for sharing. Again thanks, still fun to think about futurism.
Hey Tom. Thanks for the feedback. As for your points: The 4th Amendment’s protection against unreasonable searches and seizures is an interesting angle. Did the founders conceptualize that information about you and who you are could be “seized”? From my perspective, the main problem is that the US has struggled with what privacy is from a constitutional standpoint, and we have no shared agreement on what privacy is from an information standpoint. The current way we have structured the internet and digital platforms uses direct and inferred information about our actual consumption habits – so how can a platform deliver relevant content to you if your digital behavior is protected? As for creating a digital dossier, sort of like a medical record where you determine who has access to what, that also sounds interesting.