AI, amplification, and the future of being human

I recently read the MIT Technology Review issue on 10 Breakthrough Technologies, in which many articles described innovations powered by artificial intelligence (AI). This made me reflect on the hopes, fears and hype surrounding AI, which is best described as simultaneously an algorithm, an underlying infrastructure, and an ever-growing actor in making products and services smarter and more relevant to humanity.

The prolific inventor Ray Kurzweil has long hoped for a true end-point to AI, which he refers to as “The Singularity.” He defines this as the state at which AI becomes “conscious” and has independent agency. His vision is a logical destination of humanity’s long history as a toolmaker seeking to increase human capability and shape destiny. Over time, these discrete tools turned into complex machines in the rapid age of industrialization. Many human crafts became mechanized, such as textiles, wood processing, and steel making. This created mass production and economies of scale that started a new era of world trade and wealth creation.

This discussion is in five parts.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Part 1. The road to AI began as a mechanized robot

It was from the ashes of World War I, in 1921, that the writer Karel Capek coined the term “robot” in his play “R.U.R.” for machines that looked like humans but were built for servitude. The anthropomorphization of robots was a natural progression of centuries of mythologies about beings in human form that were something other, such as The Turk, an entertainment curiosity.

Science fiction then built upon robots through the writings of Isaac Asimov. In his short story “Runaround” he introduced the “Three Laws of Robotics,” which were programmed into robots to ensure they faithfully served humanity. These “laws” remain so convincing that many people wrongly assume they are an accepted standard in current robot development:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

“Robots are machines built upon what researchers call the ‘sense-think-act’ paradigm. That is, they are man-made devices with three key components: sensors that monitor the environment and detect changes within it; processors or artificial intelligence that decides how to respond; and effectors that act upon the environment in a manner that reflects the decisions, creating some sort of change in the world of a robot.” (P.W. Singer, Wired for War, p. 67)
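The sense-think-act loop in Singer’s quote can be sketched in a few lines of code. This is a minimal illustrative toy, not any real robot’s control software: a hypothetical thermostat-like agent whose sensor, decision logic and effector are stand-ins for the three components he describes.

```python
# A toy sense-think-act loop: sensor -> processor -> effector.
# All names and values here are hypothetical illustrations.

def sense(environment):
    """Sensor: read the current temperature from the environment."""
    return environment["temperature"]

def think(temperature, setpoint=20.0):
    """Processor: decide how to respond to what was sensed."""
    if temperature < setpoint:
        return "heat"
    elif temperature > setpoint:
        return "cool"
    return "idle"

def act(environment, decision):
    """Effector: change the environment to reflect the decision."""
    if decision == "heat":
        environment["temperature"] += 1.0
    elif decision == "cool":
        environment["temperature"] -= 1.0
    return environment

# One pass through the loop.
env = {"temperature": 17.0}
decision = think(sense(env))
env = act(env, decision)
```

Real robots differ only in scale: richer sensors, far more complex decision-making, and physical effectors, but the same three-stage cycle.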

It took another world war not to advance robot development, but to create primitive computers to do cryptography. The mathematician Alan Turing not only developed the concept of a “universal machine” that could, with enough time and data, solve any computable problem; he also built on these wartime efforts to develop the Turing Test in 1950. The test asks whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. He originally opened with the line “I propose to consider the question, ‘Can machines think?’” and then reframed it as “Are there imaginable digital computers which would do well in the imitation game?” This pivot from “think” to “imitation” is important because “thinking” was believed to be exclusively in the realm of human consciousness.

Parallel to Turing’s work, Vannevar Bush and Norbert Wiener went into greater detail about electronic data processing and cybernetics. They were not concerned with imitating the human form as a visual illusion with limited capacity for servitude, but with systems devoid of human form that could enhance human thinking as a powerful mental proxy. These systems would assist humans by finding patterns and running many scenarios to detect a specific signal within the noise.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Part 2. The birth of amplified intelligence

One of the more powerful, almost presaging, papers on AI is Vernor Vinge’s essay “The Coming Technological Singularity: How to Survive in the Post-Human Era,” published in 1993. Vinge synthesized the intent of AI with the technological developments coming together at the time to provide a critique of pervasive technological environments. The power of the essay is how much he correctly predicted, such as a worldwide Internet, decision support systems, local area networks that empower teams, and digital limb prosthetics. Here he reflects on the speed of change that humans have unleashed on themselves:

“From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in ‘a million years’ (if ever) will likely happen in the next century. . . . Greg Bear paints a picture of the major changes happening in a matter of hours.” (p. 2)

Like Kurzweil, Vinge felt that the Singularity would happen, but that a series of digital systems would first be needed to amplify the intelligence of humans. Humans would interact with embedded, autonomous and self-aware devices informed by networks and interfaces. He called this “Intelligence Amplification,” the path to “Superhumanity.”

“We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity.” (p. 3)

For Vinge, “The problem is not that the Singularity represents simply the passing of humankind from center stage, but that it contradicts some of our most deeply held notions of being.” (p. 8) Ego and self-awareness are humankind’s most important differentiators from AI systems and something we should not give away. He ended the essay with a quote from Freeman Dyson: “God is what mind becomes when it has passed beyond the scale of our comprehension.”

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Part 3. On the road to superhumanity

Since 1993 we have seen an accelerated progression of storage, compute and network that has knit together bare metal infrastructures. These are now abstracted into virtualized systems that can scale up and out with low latency to support our growing reliance on digital platforms that deliver on-demand smarter products and services.

AI needs five things to power it:
  1. Corpus of knowledge or basic principles as a foundation
  2. Vast amounts of data based on volume, velocity, variety, veracity and value
  3. Algorithms that can modify the foundation over time
  4. Metadata and analytics that deepen the meaning of data itself
  5. Low latency network with scalable compute/storage to support the intent of AI

These factors have come together to integrate the benefits of AI into more and more of the things we use and what we do with them.

“Ray Kurzweil has found that the challenge isn’t just inventing something new, but doing so at just the right moment, when both technology and the marketplace are ready to support it. ‘About 30 years ago, I realized that timing was the key to success . . . Most inventions and predictions tend to fail because the timing is wrong.’” (P.W. Singer, Wired for War, p. 95)

With the developments of solid state storage, the emergence of 5G networks and the growth of edge computing, we are moving into a bizarre world of interconnected devices that can consume our behaviors through our tablets, smartphones and the internet of things. These can now serve up content and experiences tailored to accelerate specific tasks, which Vernor Vinge saw coming back in 1993. In this world, the dog and the tail are one and it is getting increasingly difficult to define “reality.”

The intention of AI is to accelerate progress and to create new types of value across a wide variety of industries. The goal of AI outputs is to deliver insights to nontechnical people and to create confidence in AI’s results, insights and recommendations for targeted judgement. The MIT Press “Little Green Book on Data Science” states that AI is “an amalgam of technologies and disciplines connected by data science that finds non-obvious patterns for clustering, associations, anomalies for actionable insight incorporating computer science with statistics to improve decision making.”

Since the maturation of cloud platforms, virtualization and instrumented objects, the world generates approximately one exabyte of data per day. To put that in context, five thousand years of documented history created about five exabytes. Given the volume, velocity, variety, veracity and value of our own data trails through text, audio, video and image, as well as the newer types of data and metadata from sensors, making sense of all of this data is daunting. Transforming this information into insights and knowledge that provide value is even more so.
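As a quick sanity check, taking the two estimates above at face value (roughly one exabyte per day today, roughly five exabytes for all of recorded history), the comparison works out starkly:

```python
# Back-of-the-envelope arithmetic using the figures quoted above.

EXABYTE = 10**18                       # bytes
daily_output = 1 * EXABYTE             # approximate global data output per day
recorded_history = 5 * EXABYTE         # estimate for five thousand years of documents

# At today's rate, the world produces the informational equivalent of
# all of recorded history in:
days_to_match_history = recorded_history / daily_output
print(days_to_match_history)  # prints 5.0
```

In other words, by these estimates we now generate the equivalent of five millennia of documented history roughly every five days.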

To help sort through data overload, AI uses neural networks for deep learning. These networks can automatically derive useful attributes through representational learning and find appropriate generalizations from the data, requiring little to no intervention from humans.
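The core idea of learning from data rather than hand-coded rules can be shown with the simplest possible neural unit. This is a deliberately tiny sketch, not a deep network: a single artificial neuron (a perceptron) that learns the logical AND function purely by adjusting its own weights in response to labeled examples, with no rule ever programmed in.

```python
# A single neuron learns logical AND from examples, not from rules.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

# Training data: input pairs and the target output (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, initially uninformative
b = 0.0          # bias
lr = 0.1         # learning rate

# Repeatedly nudge the weights toward the correct answers.
for _ in range(20):
    for (x1, x2), target in examples:
        prediction = step(w[0] * x1 + w[1] * x2 + b)
        error = target - prediction
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

# The neuron has now "found" the pattern without being told the rule.
for (x1, x2), target in examples:
    assert step(w[0] * x1 + w[1] * x2 + b) == target
```

Deep learning stacks many layers of such units, which is what lets modern networks derive useful attributes from raw data on their own.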

The good news is that AI algorithms are indeed modifying themselves and getting better and better at what they are intended to do. David Silver of DeepMind led the creation of AlphaGo, the program that mastered the ancient Chinese game Go and beat the best human players. The team then built a new version, AlphaGo Zero, to see whether the algorithms could learn the game without any corpus of knowledge based on human experience, working only from abstract principles and the rules themselves. AlphaGo Zero learned on its own and used moves against human players that were unorthodox but legal within the rules of the game, and effective. This marked a shift from supervised learning to reinforcement learning, in which systems learn by themselves and essentially become more creative as they gain deep understanding.
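The reinforcement-learning idea behind this can be sketched with a toy example. To be clear, this is generic tabular Q-learning on an invented five-square walk, nothing like DeepMind’s actual system: the agent is given no examples of “good” moves, only the rules of the game and a reward at the far end, and discovers a policy from trial and error alone.

```python
# Toy Q-learning: learn to walk right along a line from reward alone.
import random

N_STATES = 5          # positions 0..4; a reward waits at state 4
ACTIONS = [-1, +1]    # step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise exploit current knowledge.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Core Q-learning update: learn from reward, not from examples.
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy now prefers moving right in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

No move was ever labeled “correct”; the preference for moving right emerges entirely from the propagated reward signal, which is the same principle, vastly scaled up, behind self-taught game-playing systems.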

The bad news is that these algorithms operate in a black box of ambiguity. We do not know how the algorithmic decisions are being made. There is no map or single log file that can decipher how an AI reached a decision; all we can review are the outputs and results. Many AI scientists are concerned about this Pandora’s box because much of our physical and digital infrastructure is increasingly controlled by automated computer networks. There is little runway and there are few manual overrides for the humans who are supposed to have ultimate judgement over the machines’ decisions. If a machine system goes out of control, all we currently have is the equivalent of a big red button to turn it “off,” with possibly catastrophic results.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Part 4. Current Uses and Possible Pitfalls of AI

Productivity, wealth creation and improvements in standards of living are powering the current explosive expansion of AI. Like the industrial revolution that super-charged global trade and capitalism in the 19th century, machines are being used to assist humans in producing more goods and services in shorter amounts of time and to a higher standard of mass-produced quality. The difference is that during the industrial revolution machines were viewed as augmenting the goals of business and labor. Machines did not have an independent sense of agency, and humans certainly did not view them as having any. The industrial revolution did, however, have adjacent effects: the redefinition of labor and work, the creation of new types of jobs, the birth of unions, five-day workweeks, and the rise of the middle class.

We have also seen incredible stresses on equity and social cohesion due to automation and the proliferation of digital platforms. Many people can now access the digital tools needed to communicate one-on-one or one-to-many, but the size of the bullhorn determines the level of noise they can make. The signal can also be subverted to nefarious ends, since most people cannot determine whether or not something is true. Truth was once a shared value of understanding but has now splintered into infinite shades of gray, resulting in the current plague of “fake” news based on overt human prejudice, something algorithms are not able to consistently identify.

In the world of social media, one person may have their data stored in anywhere from 250 to 1,000 databases. Without the use of AI, much of this data would be too complex and fractured for humans to use. Machine algorithms have no problem parsing massive and disparate data, but they are not yet able to filter out inherent human biases. They are essentially still human creations, with our explicit, implicit or unintentional prejudices programmed right into them. The MIT Press Little Green Book on Data Science rightly states: “The more consistent a prejudice in society, the stronger that prejudicial pattern will appear in the data about that society and replicate the patterns in the data . . . individuals may be treated differently not because of what they have done, but due to data driven inferences about what they might do.”

There are valid questions concerning the accuracy, security and privacy of data. The United States has no overriding rules on how data is generated, collected, analyzed and re-aggregated that protect both individual privacy and business needs. To some, the internet should be a basic right and a digital commonwealth that serves the collective good. To others, it is a shared utility to be used for private gain. These two views are not mutually exclusive; they co-exist in tension, like matter and antimatter occupying the same space.

Consider this: how can digital platforms create personalized and customized experiences if personal data is not associated with specific personal information (PI) that aligns to specific preferences and relevancy? Yet the commercialization of personal data through digital profiling is highly controversial, as we saw with Facebook and Cambridge Analytica. Many companies claim they are using derived, aggregated or anonymized data in order to reuse it for future, unforeseeable business needs, but is this a valid argument if we cannot agree on objective standards?

Europe has taken a first step by passing the General Data Protection Regulation (GDPR), which frames an agreed-upon set of rules determining the rights of individuals in how their data is collected and used. It is still too early to know how effective GDPR will be and whether it will influence other nations to adopt similar rules, since the internet and AI do not necessarily recognize nation-state borders.

The privacy of digital platforms is not the only issue facing AI. Isaac Asimov’s Three Laws of Robotics, long accepted as canon, are being called into question as physical and digital robots proliferate. In 2016, UNESCO science experts suggested that robots could deserve legal rights if they develop the ability to feel emotions and to distinguish between “right” and “wrong,” thereby becoming “moral machines.” Systems such as AlphaGo Zero, and the acceleration of reinforcement learning, may eventually evolve into systems that seem as if they have, or actually do have, a sense of independent agency. While this is not a new concern (consider Arthur C. Clarke’s “2001: A Space Odyssey,” in which an intelligent computer, HAL, turns on the astronauts it was meant to serve), it is now becoming more real.

The panel stated: “Robotics remains both ethically and legally under regulated, probably because it is a relatively new and rapid changing field of research whose impact on the real world is often difficult to anticipate.” As AI powers more autonomous systems, their ability to be identified as highly intelligent will force humanity either to reclassify non-anthropomorphic systems as having a sense of agency, or at least to define a set of human ethics for the creation, deployment, and use of these systems.

An example of this is Boston Dynamics’ series of walking robots based on lifelike traits. One of the most interesting is “BigDog,” created in 2005 for DARPA as an assistive mover of equipment for the military. A newer version called “Spot” is quieter, only 3 feet high and around 55 pounds. Soldiers may become emotionally attached to Spot, given that the robot moves like a canine and that “Spot” is a common name for a pet dog. In battle, if Spot is in danger or becomes “wounded,” will soldiers risk their lives because they have empathetic feelings for the robot? It may sound unlikely, but it was already an issue in the Persian Gulf.

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has released a draft, “A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems,” which seeks to define standards and certification for complex intelligent systems “. . . to remain human-centric, serving humanity’s values and ethical principles.” The implication is that any intelligent system, however intelligent, should be controlled and monitored by humanity.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Part 5. AI, Übermensch and the future of creativity

Every technology has inherent beneficial uses and possible dystopian misuses. Humanity has benefited from digital technologies and platforms and will continue to do so. The evolving application of AI will help governments, businesses and individuals be more productive and create more wealth, but along with these gains we will also experience the detrimental effects of social media and bots that distort information. As David Brooks of the New York Times noted, “We’re a nation coming apart at the seams, a nation in which each tribe has its own narrative and the narratives are generally resentment narratives.”

Unlike the analog machines they are replacing, the digital platforms emerging from the contemporary transformation will have histories and the ability to increasingly self-regulate without human intervention. We cannot know unequivocally that these digital systems will not have a new type of agency, because we may not yet fully understand agency that is not defined by humans. These new types of entities are not bound by place, nor is their intelligence held in a single place like one brain; they are networks of beings that can be one and many at the same time, transcending time and space.

Going back to the MIT Technology Review on 10 Breakthrough Technologies, Sean Dorrance Kelly wrote an interesting counterpoint article called “What computers can’t create.” His thesis is that creativity is the most mysterious and impressive of human achievements. A good example in the article is the formulation of the theory of relativity. Many scientists had pieces of the mechanics of the theory, but Albert Einstein wove these independent pieces into an “. . . original, remarkable, and true understanding of what the equations meant and could convey that understanding to others.”

To Kelly, many uses of AI “. . . work by simulating and channeling the creative abilities of the human artist (and reflect the imitations of those abilities).” He goes on to state that “We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.”

Through AI, humanity may have set up an unintentional Faustian bargain: by allowing smart systems to “learn us” and anticipate our unarticulated needs in exchange for greater convenience, we risk eroding what makes us human. Like Vernor Vinge, Kelly is concerned that humans are all too willing to hand over to digital platforms, and the algorithms that power them, our agency and our “. . . essential character of reasoning . . . and to treat ‘creativity’ as a substitute for our own, then machines will indeed come to seem incomprehensibly superior to us.” As anyone who has seen an episode of Black Mirror, or movies such as Terminator, Ex Machina or Her, knows, it seldom ends well for humans.

Friedrich Nietzsche defined the Übermensch as a goal for humanity in which we surpass the mainstream and give greater meaning to existence through a new set of values. Surpassing then becomes a question of context. If you compared a human being from the year 1000 with one from the year 2019, you might not find an appreciable difference in physicality, but you would in visible behaviors, knowledge and values. This may lead you to assume the contemporary human is more advanced or enlightened, but the yardstick of what makes us human is a moving target. If AI becomes independent, with its own rules and values but without being “human,” then we will have created a new intelligence symbiotically intertwined with humanity, with incredible social, political and economic consequences. The Freeman Dyson quote mentioned earlier could be modified to say: “AI is what mind becomes when it has passed beyond the scale of human comprehension.”
