With all the focus on augmented reality, it is easy to forget that the trend of geolocating and posting data layers on top of specific locations is actually only a few years old. Society has embraced these technologies through smartphone applications in which friends, family and social networks tag physical spaces with comments, images, video and other bits of data. Users’ ability to review and modify this content based on their feelings and experiences within those spaces demonstrates how powerful the concept has become.
Hollywood and television seamlessly integrate digital communications environments in which protagonists and antagonists battle it out through encrypted and open networks to achieve their aims. On the edge of these scenarios have been intense discussions about whether these developments are changing the definition of what it is to be human, and whether we will be able to control the technologies that are supposed to help us.
There seem to be three levels of augmenting human reality:
• the use of external digital networks to help humans with their daily activities
• the use of implanted digital devices and chips inside human bodies to monitor the body, compensate for physical disabilities or automate individual needs
• the use of quantum computing and complex algorithms to find patterns in computationally complex contexts
All three represent a move towards intelligence amplification: extending human cognitive abilities to understand relationships between situations and data patterns. With the rise of, and our increasing dependence on, computers, networks and the real-time flow of data come physical and philosophical opportunities and challenges to what makes us human.
Humans vs. Machines?
During my tenure at Tanagram Partners, its DNA was rooted in providing digital systems that created relevancy by maximizing cognitive and workload assistance to improve performance. One project in particular — an augmented reality system for asymmetric warfare created for DARPA — provided the rocket fuel for our ideas.
In researching the project, one book in particular, “Wired for War” by P.W. Singer, provided an informative backdrop to how robotic and digital systems are changing the very definition of warfare and what it currently means to be a “soldier.” Singer highlighted a paper that has become an underground favorite of computer scientists and developers, and it has made quite an impression on my thinking about the relationship between humans and digital technology.
“The Coming Technological Singularity: How to Survive in the Post-Human Era” by Vernor Vinge of San Diego State University (pub. 1993) articulated an incredible view of the future of computing and intelligence. This eighteen-year-old paper, presented before the explosive growth of the internet and virtual reality, explores key issues we take for granted today. It created a bridge between computer scientists’ desire to reach the era referred to as the Singularity and the interim period on the road to total human/digital integration.
It may be helpful to first quickly describe a few key landmarks that should provide a better understanding of Vinge’s ideas. Intelligence is an intriguing word with social, political and economic dimensions. It is used in many cases to elevate certain populations and to keep others suppressed. In the post-industrial world, intelligence and intellectual capital are held as a competitive advantage between countries and other powerful entities. When intelligence is linked with computer systems, the philosophical dimensions of the conversation become exponentially more complex – and interesting.
We have long been fascinated with the notion of artificial beings with mechanical thinking that could be of use to the human race. In 1921, the playwright Karel Capek is credited with the first use of the word ‘robot,’ derived from the Czech word for forced labor or serf. His play R.U.R. (Rossum’s Universal Robots) dealt with robots and their enforced labor for humans. It does not have a happy ending. Capek’s message is that if an intelligent entity develops another intelligent entity strictly to perform repetitive, dangerous or frightening tasks, it will end in misery for them both.
But how does one determine whether a robot or system is intelligent? The British mathematician Alan Turing, generally acknowledged as the founder of computer science, explored this question in his 1950 paper “Computing Machinery and Intelligence.” It opens with the words: “I propose to consider the question, ‘Can machines think?’”
In the Turing Test, player A is a computer, player B is a person and player C is the interrogator. The interrogator’s role is to determine which is the computer and which is the human. In the “standard interpretation,” the machine passes if the interrogator cannot differentiate which responder is human and which is machine. This model sounds reasonable enough, until you examine it further and realize it is fraught with misinterpretation:
“The Turing test is based on the subjective opinion of the interrogator and what constitutes a humanlike response to their question. This assumes that human beings can judge a machine’s intelligence by comparing its behaviour with human behaviour. Every element of this assumption has been questioned: the human’s judgement, the value of comparing only behaviour and the value of comparing against a human. Because of these and other considerations, some AI [artificial intelligence] researchers have questioned the usefulness of the test.” – Wikipedia Article on the Turing Test*
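The three-player setup described above can be sketched in a few lines of code. Everything below the game function — the player responses and the naive interrogator — is a hypothetical illustration, not anything Turing specified; the sketch only shows the shape of the “standard interpretation”: two anonymous channels, one interrogator, one guess.

```python
import random

def imitation_game(human_respond, machine_respond, interrogate):
    """Run one round of the 'standard interpretation' of the Turing Test.

    The interrogator sees two anonymous channels, "A" and "B", one backed
    by a human and one by a machine, and must name the machine. Returns
    True if the interrogator identifies the machine correctly.
    """
    # Randomly assign the players to channels so the labels carry no hints.
    channels = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        channels = {"A": machine_respond, "B": human_respond}

    questions = ["What is 2 + 2?", "Describe a sunset.", "Do you dream?"]
    transcript = [(q, channels["A"](q), channels["B"](q)) for q in questions]

    guess = interrogate(transcript)  # the interrogator names "A" or "B"
    return channels[guess] is machine_respond

# Hypothetical players: the machine betrays itself by answering
# arithmetic with a bare digit instead of a word.
human = lambda q: "four" if "2 + 2" in q else "hard to put into words"
machine = lambda q: "4" if "2 + 2" in q else "hard to put into words"

def naive_interrogator(transcript):
    # Judge solely by the answer to the arithmetic question.
    question, answer_a, answer_b = transcript[0]
    return "A" if answer_a == "4" else "B"

caught = imitation_game(human, machine, naive_interrogator)  # True here
```

The Wikipedia passage’s objection survives the sketch intact: the verdict hinges entirely on what this particular interrogator happens to treat as a humanlike answer.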
Rating any intelligence model is not straightforward. Even humans, generally considered the current gold standard of intelligence, are not necessarily consistently rational. There is some question as well as to the actual seat of human intelligence. Does it truly lie in the mind (conceptual) or the brain (physical) or both? Can they ever be viewed as separate entities? Is the mind completely dependent on the brain? Unlike the brain, will the mind ever be explained through purely physical descriptions?
We may also be starting to see early signs that all seemingly intelligent behavior may not necessarily be human, or even biological. Researchers from Cornell University recently demonstrated a conversation between a chat bot and a clone of itself. They were surprised by the spontaneity of the conversation, which bordered on “. . . a Samuel Beckett play . . .” Turing himself noted that a sonnet written by a machine would be better appreciated by another machine.
Into the 1980’s, the field of artificial intelligence (“AI”) defined intelligence as human intelligence. For many decades, creating machines that could emulate these attributes was the holy grail of researchers. The idea of intelligent yet subservient machines catering solely to the needs and desires of humankind captured the popular imagination. The overall goal of AI — to create a machine that can process information as intelligently as a human can — has not yet been realized, but work in the field has spurred the development of expert systems forward.
In the 1990’s, the rise of personal computing power and the speed of chips and graphics processors made the idea of virtual reality possible, however primitively. People became enamoured of the idea of seamless digital environments in which they could be mentally immersed, interacting through proxy avatars. Programs such as Second Life became all the rage, although the ratio of active participants to registered users was incredibly lopsided. Unfortunately, the concept of virtual reality far exceeded the actual experience. It still took an inordinate amount of computing power to describe digital environments, and the gap between what a person’s mind could envision and what a computer could provide as feedback was still numbingly wide.
Closing that gap is the holy grail of today. With the current growth of computing power, humans are becoming increasingly reliant on computers, computer networks and systems thinking. The anticipation (others may refer to it as a “concern”) is that our digital/mechanical systems may be on the verge of being able to process information more quickly than our biological/human systems ever will be. The time for the re-exploration of the computer/human dyad is approaching.
The Beginning Reign of Superhumanity?
Which may bring us to the end of the reign of “human intelligence,” aka the Singularity. Vinge recognized that this re-exploration may not happen the way experts, alarmists and science fiction writers have envisioned it — a them-vs.-us scenario in which the human race winds up subjugated by a totally integrated digital neural network (as in The Terminator). Rather, this new model would be subtle, accessible and completely voluntary. In essence, Vinge suggests that rather than fight the machines, we will literally join up with them.
He broke his premise into several parts, creating a cognitive bridge that defined where humans were in their relationship with digital systems. He then specified an augmented relationship between humans and computers, recognizing that the future holds several scenarios, all indicating aspects of the Singularity:
• The development of computers that are “awake” and superhumanly intelligent.
• Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
• Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
• Biological science may find ways to improve upon the natural human intellect.
This, however, brings up the notion of agency, or state of being, and of what entity has control over another. Society still views computers and technology as subservient to human goals and objectives (the lessons of R.U.R. have yet to sink in). Our current perceived state of agency is that humans control technology and that computers improve the human condition. A computer’s agency is still to follow human desires. Vinge does thoughtfully credit what makes humans unique as living entities on earth, but contrasts it with the potential computational power of computers:
“We humans have the ability to internalize the world and conduct “what if’s” in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.”
Vinge does discuss his doubts as to the exactness of the Singularity and whether computers could ever reach parity with the human brain. Commercial digital signal processing might be awesome, giving an analog appearance even to digital operations, but nothing would ever “wake up,” and there would never be the intellectual runaway that is the essence of the Singularity.
Instead of latching onto a pure definition of the Singularity, he floats the term superhumanity to sidestep the rigid separation between machine and human: “. . . minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds.” Computers will no longer be subservient to man, but integrated, so that the human/computer dyad becomes one.
This is where Vinge describes a middle ground, or an in-between state between the “now” and the Singularity. He calls this period intelligence amplification where ” . . . every time our ability to access information and to communicate it to others is improved, in some sense we have achieved an increase over natural intelligence.” He felt that “Building up from within ourselves ought to be easier than figuring out first what we really are and then building machines that are all of that.”
On the Road to Singularity – Intelligence Amplification
Instead of placing computer and digital systems on a separate developmental track from humans, intelligence amplification recognizes the complementary strengths and weaknesses of humans and computers, creating a win-win collaboration between the two. Vinge then describes specific applications for intelligence amplification:
• Human/computer team automation: Take problems that are normally considered for purely machine solution (like hill-climbing problems) and design programs and interfaces that take advantage of humans’ intuition and available computer hardware.
• Develop interfaces that allow computer and network access without requiring the human to be tied to one spot, sitting in front of a computer. (Ironically, this specific point has been reached through smart phone applications and cellular and broadband networks).
• Develop more symmetrical decision support systems. A popular research/product area in recent years has been decision support systems . . . As much as the program giving the user information, there must be the idea of the user giving the program guidance.
• Use local area nets to make human teams that really work (groupware). The change in viewpoint here would be to regard the group activity as a combination organism.
• Exploit the worldwide Internet as a combination human/machine tool. Of all the items on the list, progress in this is proceeding the fastest and may run us into the Singularity before anything else . . . a biosphere as data processor recapitulated, but at a million times greater speed and with millions of humanly intelligent agents (ourselves). (The United Nations recently released a report stating that internet access is a human right.)
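Vinge’s first item can be made concrete with a toy sketch. Assume a one-dimensional “hill-climbing problem”: the computer does fast, blind local search, while the human contributes intuition in the form of promising starting points. The function names and the toy objective below are illustrative assumptions, not anything from Vinge’s paper.

```python
def hill_climb(f, start, step=0.1, iterations=1000):
    """The machine's half of the team: fast, greedy local search."""
    x = start
    for _ in range(iterations):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            break  # local maximum reached; blind search is stuck here
        x = best
    return x

def team_search(f, human_guesses):
    """The team: a human supplies intuitive starting points, and the
    computer refines each one, keeping the best result found."""
    return max((hill_climb(f, guess) for guess in human_guesses), key=f)

# A bumpy objective with two peaks (at x = -2 and x = +2). Restarting
# from several human-chosen guesses keeps the search from depending on
# one lucky starting point.
f = lambda x: -(x**2 - 4) ** 2
best = team_search(f, [-3.0, 0.5, 3.0])  # lands near one of the peaks
```

The division of labor is the point: the machine contributes speed within each basin, while the human’s guesses decide which basins get explored at all.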
With the integration of humans and computers, Vinge brings it back to “superhumanity” where:
“Intelligence Amplification undercuts the importance of ego from another direction. The post-Singularity world will involve extremely high-bandwidth networking. A central feature of strongly superhuman entities will likely be their ability to communicate at variable bandwidths, including ones far higher than speech or written messages. What happens when pieces of ego can be copied and merged, when the size of a self-awareness can grow or shrink to fit the nature of the problems under consideration? These are essential features of strong superhumanity and the Singularity. Thinking about them, one begins to feel how essentially strange and different the Post-Human era will be — no matter how cleverly and benignly it is brought to be.”
One possible outcome of this integration is the shifting boundary between jobs done by automation and jobs traditionally done by humans. “The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of true technological unemployment finally come true.” According to another recent United Nations report, robots are increasingly taking on tasks that were once the domain of humans, such as assembly-line work, dangerous jobs and home automation, because the price of robots has fallen relative to human labor costs.
Intelligence amplification is already here. It is increasing humans’ cognitive and workload abilities by helping our brains find patterns through computational power and pattern recognition many times faster and more efficiently than the human brain alone ever could. With the proliferation of microchips in objects and the growth of wireless networks, many common objects that were purely mechanical can now communicate information, and even communicate with other objects. Objects are turning into pipes for instructions, messages and data. These messages are part of a world of delivered services; without the service, the value of the object is degraded. We are automating the world around us, and that automation can, to varying degrees, now regulate itself based on the instructions we have embedded in it. This is not an intelligence, just an intelligent way to help humans live better lives.
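The “embedded instructions” that let automated objects regulate themselves are often nothing more exotic than a simple control loop. A minimal sketch, using a hypothetical thermostat as the object: its instructions are just a setpoint and a tolerance band, and it “regulates itself” by deciding at each step whether to heat.

```python
def thermostat_step(current_temp, heating, setpoint=20.0, band=0.5):
    """One step of a self-regulating object: heat below the band, idle
    above it, and otherwise keep the current state (hysteresis prevents
    rapid on/off flapping near the setpoint)."""
    if current_temp < setpoint - band:
        return True   # too cold: turn heating on
    if current_temp > setpoint + band:
        return False  # too warm: turn heating off
    return heating    # inside the band: no change

# The embedded "instructions" (setpoint, band) fully determine behavior.
decisions = [thermostat_step(t, heating=False) for t in (18.0, 20.2, 22.0)]
# -> [True, False, False]
```

The hysteresis band is why the object does not need outside help near the setpoint: the rule itself encodes when to leave things alone, which is exactly the self-regulation, not intelligence, described above.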
There are many contemporary articles that discuss the concerns of human dependence on digital technologies which are rewiring the brain and changing social interactions. Linda Stone, a software executive who has worked for both Apple and Microsoft, calls current multi-modal behaviors continuous partial attention. She described this as:
“we are so busy keeping tabs on everything that we never focus on anything. This can actually be a positive feeling, inasmuch as the reason many interruptions seem impossible to ignore is that they are about relationships — someone, or something, is calling out to us. It is why we have such complex emotions . . . feeling alternately drained . . . and exhilarated when we successfully surf the [digital] flood.”
Contemporary society is grappling with the benefits and unintended consequences of using digital technologies. The New York Times reported on authenticity in a digital age, referring to the Pope’s message “Truth, Proclamation and Authenticity of Life in the Digital Age,” in which he notes that the growing use of digital technologies to create an online life “inevitably poses questions not only of how to act properly, but also about the authenticity of one’s own being.” The Pope added that “there is the challenge to be authentic and faithful, and not give in to the illusion of constructing an artificial public profile for oneself.”
While we are experiencing a wide variety of behaviors shaped by social and other technologies, no one yet knows whether the brain is being rewired at a biological level or whether social behaviors are instead adapting in response to our continual access to the fluid benefits of these technologies. The discussion of digital natives vs. digital immigrants demonstrates a change in attitudes towards privacy, what defines friendship, and communication etiquette.
The question is: how do we adapt to this intelligence amplification, to its effect on ourselves, our human relationships and our concept of human agency, and to an emerging recognition of the new human/digital network? Vinge’s paper presents a wonderful, reasonable and very understandable point of view on the roles of humans and computers, and on the integrative potential of unity rather than mutual exclusivity or the emulation of human intelligence and behavior by digital systems.
In conclusion, he reflects on the Singularity and intelligence amplification by quoting Freeman Dyson, “God is what mind becomes when it has passed beyond the scale of our comprehension.”
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
* (footnote: Turing developed a variation of this test in which player A was a female and player B was a male. Voice responses were then altered to disguise each player’s gender. The results proved to be consistently baffling.)