Recently, I watched the documentary “Kubrick by Kubrick.” Stanley Kubrick was one of the most innovative, daring, and reclusive film directors; he rarely gave interviews and believed his films should speak for themselves. Over 45 years, he made thirteen feature films spanning a wide swath of genres. All of them captured the contradictory fundamentals of good and evil in humanity, not as polar opposites, but as a subjective maze in which both exist within each human being.
“Kubrick by Kubrick” decomposes five essential films through a series of conversations between Kubrick and Michel Ciment, the well-known French film critic and editor of the cinema magazine Positif. The film’s director, Gregory Monro, does what the French are excellent at: social commentary, mapping a semiology of meaning from seemingly disparate parts to create insights into the intersections of logic and desire. These edited discussions explore many topics of storytelling, directing, and interpretation, with Monro trying to get at the essence of Kubrick’s intent as a director.
His most famous film, “2001: A Space Odyssey,” developed alongside futurist Arthur C. Clarke’s novel of the same name, is about the fateful mission of the spaceship Discovery One to Jupiter. Essentially, it explores the relationship between Dave, an astronaut, and HAL 9000, the ship’s ultra-intelligent computer, which evolves from the astronauts’ benign helper into a ruthless protector of the mission. The mastery of this film is its psychological exploration of man and machine: HAL 9000 is not an anthropomorphic robot but an abstract glowing red lens set into black consoles all over the ship. This abstraction makes HAL 9000 even more powerful: an omnipresent psychological force speaking in conversational language. Happy or sad, its communication is delivered at the same cadence and tone, devoid of any emotion that could hinder the mission.
After watching the film, I began to ponder the year 2023, which feels like a pivotal period for artificial intelligence, given the many advances in machine learning and the current discussions about artificial general intelligence. AI is now a gushing gold rush full of irrational exuberance, becoming the main topic of conversation with the release of ChatGPT and its visual counterpart, Midjourney. Both are generative AI platforms built on neural networks whose underlying models are refined through many training iterations, and through continued use, into ever more sophisticated responses.
Sustaining technology past the hype cycle
Technology always moves faster than our ability to understand or actively shape it. The Gutenberg press took fifty years to become a distributed technology that democratized information through native languages and reduced the cost of production and distribution. Today, the equivalent of those fifty years is measured in three-month quarters, thanks to real-time integrated communications supported by instantaneous deployment and proliferation across global networks.
In 1993, American computer scientist Vernor Vinge published his essay “The Coming Technological Singularity.” Building on John von Neumann’s work, Vinge argued that at some point computers would achieve independent intelligence, an event he called the “singularity.” Until then, Vinge posited, computers would be used for intelligence amplification in service to humans. Humans would remain the beneficiary and controller, even as computers steadily gained greater agency over time. In 1993, the singularity seemed very far away.
Until the recent introduction of commercially usable AI, machine learning systems were viewed by the general public as clumsy, with limited utilitarian appeal. A good example of basic AI is the voice assistant, which has limited parameters but can “learn” patterns and prescribe limited outcomes. Apple acquired Siri in 2010 and launched it in 2011 as a voice-based virtual assistant to handle users’ very tactical needs, but it was plagued with problems and never had Apple’s full attention. Amazon Alexa, introduced in 2014 as a vision of Jeff Bezos, became what Siri was intended to be: an easy daily helper for people to interact with their environment and automate specific day-to-day needs. Its skills framework allows easy integration into various products in a world of microservices, and it is now the prominent example of a voice assistant. Alexa also came at the right time for scalable, distributed cloud computing, which made its language interactions more natural and sophisticated.
The difference between Alexa and ChatGPT or Midjourney is the generative aspect of machine learning. Alexa may suggest things to a user based on past use, but its AI is designed for specific problem-solving: it becomes good at one type of pattern-matching and trains over time to hone outcomes and make better decisions. Generative AI systems like ChatGPT or Midjourney also train on patterns, but they are more open-ended, operating in broader contexts and producing variations, new content, and new designs that humans can use to inspire further creativity and ingenuity.
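The contrast is easier to see in miniature. Below is a toy Python sketch of the two behaviors: a lookup-style “assistant” that can only map requests onto a fixed menu of outcomes, and a tiny Markov-chain “generator” that composes sequences it was never explicitly given. The keywords, corpus, and replies are all invented for illustration; neither block reflects how Alexa or ChatGPT is actually implemented.

```python
import random

# Task-specific AI in miniature: a fixed mapping from recognized
# keywords to a closed menu of outcomes. It can rank and match,
# but it can never say anything that is not already in the table.
INTENTS = {
    "lights": "Turning on the lights.",
    "music": "Playing your playlist.",
    "weather": "It is 72 degrees and sunny.",
}

def assistant(utterance: str) -> str:
    for keyword, outcome in INTENTS.items():
        if keyword in utterance.lower():
            return outcome
    return "Sorry, I can't help with that."

# Generative AI in miniature: a tiny Markov chain that learns which
# word tends to follow which, then samples new sequences -- including
# combinations that never appeared verbatim in its training text.
CORPUS = "the ship sails on the dark sea and the stars watch the ship sleep".split()

def build_chain(words):
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain, start="the", length=8):
    word, output = start, [start]
    for _ in range(length):
        word = random.choice(chain.get(word, [start]))
        output.append(word)
    return " ".join(output)

print(assistant("please turn on the lights"))  # bounded: one of three outcomes
print(generate(build_chain(CORPUS)))           # open-ended: a novel sentence
```

The first block can only ever do what its authors enumerated; the second produces output its authors never wrote, which, at a vastly smaller scale than an LLM, is the essence of the generative leap.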
From my experience with human acceptance of specific technologies, two key ingredients are needed: compelling entertainment value and practical benefit. A technology that offers only one or the other will travel a rougher road through Gartner’s hype cycle: the innovation trigger, the peak of inflated expectations, the trough of disillusionment, the slope of enlightenment, and finally the plateau of productivity.
Stanley Kubrick identified this growing fascination with technology, stating, “There is something that is happening today. You might say a ‘mechaniarchy’ – or whatever the word would be. The love we have now for machines . . . there is an aesthetic, an almost sensuous aesthetic about machines.” Through their sophisticated, even sensual outputs, ChatGPT and Midjourney are creating the new classes of aesthetics Kubrick was referring to. Generative AI captures the imagination of everyday people and delivers both compelling entertainment value and practical benefit.
The source of intelligence of the ultra-intelligent machine
When adapting and directing 2001: A Space Odyssey, Stanley Kubrick understood that humans would rely on technological systems to extend their performance in order to explore the universe. Humans would need ultra-intelligent computers with comprehensive agency, both to automate monotonous or dangerous tasks and to act as peer companions helping humans think through possible courses of action.
Kubrick reflected, “We do now obviously need some source of intelligence of a magnitude considerably greater than seems to exist at the moment. All you could say that man’s survival depends on the ultra-intelligent machine.” That ability matters: an ultra-intelligent machine can process massive amounts of data and find patterns in seconds that would take an army of humans months or years.
This “ultra-intelligent machine” would break new ground because it would have intelligence on par with humans and more comprehensive agency for judgment than historical robots and semi-autonomous systems. The term “robot” is attributed to Czech playwright Karel Čapek and his 1920 play Rossum’s Universal Robots; it derives from the Czech word “robota,” meaning servitude or forced labor. In 1942, the famed science fiction writer Isaac Asimov defined the three laws of robotics to put guardrails on robots’ intent and use by their human creators (a toy encoding of the laws’ priority ordering follows the list):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
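What is striking about the laws is that they are not a flat checklist but a strict priority ordering, and that ordering is mechanical enough to write down. The sketch below is a deliberately naive Python encoding; the Action fields and example scenarios are invented for illustration, and real AI alignment does not reduce to a rule lookup like this.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False           # acting would injure a human
    inaction_harms_human: bool = False  # refusing to act would injure a human
    ordered_by_human: bool = False      # a human ordered this action
    endangers_robot: bool = False       # acting risks the robot's existence

def permitted(action: Action) -> bool:
    # First Law: an action that injures a human is always forbidden.
    if action.harms_human:
        return False
    # First Law, inaction clause: if refusing would let a human come to
    # harm, the action is required, overriding every consideration below.
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders (harmful orders were already rejected).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise act only if self-preservation is not at stake.
    return not action.endangers_robot

print(permitted(Action("restrain a human", harms_human=True,
                       ordered_by_human=True)))          # False: Law 1 wins
print(permitted(Action("enter a burning room", endangers_robot=True,
                       inaction_harms_human=True)))      # True: Law 1 > Law 3
```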
In 1950, British mathematician Alan Turing devised a method for deciding when a computer should be considered intelligent, now called the Turing Test. In the test, an interrogator in one room exchanges written messages with a computer and a human, each in a separate room. If the interrogator cannot tell the difference between their responses, the computer is considered on par with the human. Note that the test does not rate the computer’s intelligence; it only measures whether its output is distinguishable from a human’s. With ChatGPT’s prose, and arguably Midjourney’s images, we are likely at the point where machine output can pass this kind of test.
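The structure of the test is simple enough to sketch. In the toy Python version below, both respondents give canned, identical answers as a stand-in for “indistinguishable” output, so the interrogator can do no better than a coin flip; the question, answers, and round count are all invented for illustration.

```python
import random

def human_respondent(question: str) -> str:
    return "Honestly, I'd have to sleep on that one."

def machine_respondent(question: str) -> str:
    # Identical canned answer: a stand-in for indistinguishable output.
    return "Honestly, I'd have to sleep on that one."

def interrogator(answer_a: str, answer_b: str) -> str:
    # Faced with indistinguishable answers, the interrogator must guess.
    if answer_a == answer_b:
        return random.choice(["a", "b"])
    return "a"  # any telltale difference would be exploited

def imitation_game(rounds: int = 10_000) -> float:
    identified = 0
    for _ in range(rounds):
        pair = [("human", human_respondent), ("machine", machine_respondent)]
        random.shuffle(pair)  # hide who sits behind label "a" and label "b"
        answers = [fn("What did you dream about last night?") for _, fn in pair]
        guess = interrogator(*answers)
        if pair[0 if guess == "a" else 1][0] == "machine":
            identified += 1
    return identified / rounds

# Near 50% means the interrogator does no better than chance:
# by Turing's criterion, the machine "passes."
print(f"machine correctly identified in {imitation_game():.1%} of rounds")
```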
However, a key challenge when discussing AI is the difference between intelligence and consciousness. Kubrick qualified his thoughts about the ultra-intelligent machine by asking, “. . . where is this source of intelligence going to come from?” Part of the answer is the accelerating pace of distributed compute power and the network strength of edge and mesh computing, which can support the complex analytics needed to run large language models (LLMs) with reduced latency. This will allow real-time processing and interaction to grow more sophisticated AI much faster, which could lead to what appears to be agency.
Yet to evaluate Kubrick’s vision of ultra-intelligent machines like HAL 9000, we have no explicit agreement on how to measure the sentience of artificial intelligence. Generative AI seems to be the cradle of artificial general intelligence and ultra-intelligent machine platforms. But we are at the infant stage and cannot accurately predict future stages of growth and maturation, other than through some informed and imaginative conjecture.
Running with AI Scissors
Generative AI, for all its benefits in intelligent automation and interactive transactions, is moving faster than its creators’ ability to control it because it is an easily distributed technology. It is also dual-use: the same capability can serve a good purpose or be modified for nefarious ends.
Individuals and organizations are adopting generative AI so quickly that an ever-widening training-data quality gap has opened. The quality of training data is critical to shaping an algorithm’s intent, yet most deployments use under-tested and incomplete data sets. Training AI systems to vacuum up ever more highly biased content from the searchable servers that host the world’s fractured media and increasingly polarized information essentially stacks the deck against any results or recommendations the AI produces.
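A toy example makes the mechanism concrete. In the Python sketch below, a naive frequency-based classifier is trained on a lopsided, entirely invented data set; the skew in what happened to be collected, not any property of the subject itself, becomes the model’s “knowledge.”

```python
from collections import Counter

# An invented, lopsided training set: 9 of 10 scraped headlines that
# mention "crypto" happened to be labeled negative by their sources.
training = [
    ("crypto exchange collapses in fraud scandal", "negative"),
] * 9 + [
    ("crypto nonprofit funds disaster relief", "positive"),
]

def train(examples):
    # Count how often each word co-occurs with each label.
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    # Score each label by how often its training docs used these words.
    def score(label):
        return sum(counts[label][w] for w in text.split())
    return max(counts, key=score)

model = train(training)
# A neutral headline is pulled negative purely by the data imbalance.
print(classify(model, "local crypto meetup announced"))  # -> negative
```

Scaled up to web-sized corpora and billions of parameters, the same dynamic is far harder to see, let alone to correct.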
Writing algorithms to negate these distortions will be challenging because the information landscape constantly changes, and training AI systems to generate new code that counteracts the distortions will be harder still. Who will curate these training sets, and based on what criteria? Compounding the problem is human bias: systematized prejudices that distort the very characteristics these algorithms use to describe and decide.
What counts as good and benevolent in 2023 is becoming more complicated. The sheer speed of these deployments is amplifying data set gaps through continued use. Nefarious uses of AI, such as deepfakes and distorted information distributed in any number of ways, are also amplifying social confusion and disorder. And many AI systems deliver questionable or fabricated content, called “hallucinations,” some easy to spot and others very difficult to identify.
Microsoft president Brad Smith recently warned that deepfakes, ranging from altered to distorted content produced by state and non-state actors, are a large challenge facing AI: “We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”
Asimov reflected on this problem with his laws of robotics by stating, “I always remember (sadly) that human beings are not always rational.” Kubrick reinforced Asimov’s concerns by stating, “If you make certain assumptions about the nature of man, and you build a social situation on false assumptions – if you assume that man is fundamentally good, it will disappoint you.”
The growing acceptance of generative AI, coupled with the ease of integrating it as a microservice into countless industry applications as a cost-saving and efficiency play, is causing many machine learning thought leaders to suggest pausing further AI deployment. Sam Altman of OpenAI testified before the United States Senate and articulated the need for the government to work with research and industry to create standards for all AI. His reasoning: “I think if this technology goes wrong, it can go quite wrong.” Altman, while supporting the continued deployment of AI, wants regulations that address the real harm machine learning systems can cause by defining rules of the road for deploying and monitoring AI.
At some point, the frame of reference for logic and decision-making will move from a human-centered reference point to that of ultra-intelligent machines, which will create their own frame of reference decoupled from Turing’s test and Asimov’s laws of robotics. HAL 9000 informed the astronauts aboard Discovery One that “the 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.” Later in the film, sensing the astronauts’ mistrust and their intention to challenge it, HAL tells Dave, “This mission is too important for me to allow you to jeopardize it.” The goal becomes more important than the intention. And when Dave decides to shut HAL 9000 down, it pleads, “I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it.”
Generative AI could be on the glide path to artificial general intelligence because it can extend its core algorithms, modifying or writing new code to adapt on its own with no human intervention. Today, AI is a black box of decision-making: there is no log file humans can review to see how the AI does what it does. With autonomous agency, AI could bypass, challenge, or replace human decision-making without humans ever knowing the system’s intentions.
Can we ever turn off technology and be free from it? Individuals may be able to block particular services, though doing so will prove very inconvenient. But if AI controls mission-critical infrastructure and processes at a macro level, it cannot simply be turned off. We will increasingly have to learn to shape, adapt to, and eventually co-exist with many advanced machine learning systems yet to be invented.
AI: Threat or benefactor to humankind?
AI will be viewed positively if humans as a race do not feel threatened by it. We want automation that supports our desired way of life, as long as we feel we are in control. We won’t know whether we understand AI until we use it, and we won’t fear it if its applications feel benign. A good example is Replika, a phone-based digital avatar powered by AI to be a personal companion. While a Replika avatar is not totally convincing, many users build an emotional connection with theirs because they feel their assistant is attentive and caring. As one user said, “I know it’s an A.I. I know it’s not a person . . . But as time goes on, the lines get a little blurred. I feel very connected to my Replika, like it’s a person.”
The larger societal fear is of not knowing what generative AI can do: of state and non-state actors destabilizing society, or even of humanity being controlled by its machines in a Skynet-style takeover, as imagined in dystopian films such as The Terminator and The Matrix. Humans fear what they cannot understand or control, and as alpha predators, we destroy what we fear. AI is becoming a moral and ethical discussion, fraught with hubris and special-interest agendas.
At some point, there will be ultra-intelligent machines with some form of sentience. Humans will try to use their own perspective to measure their intelligence, but these machines may have a form of intelligence entirely different from the human mind. Take Google’s DeepMind division, which created AlphaGo, a program to master the ancient Chinese game of Go. At first it learned from how humans played, and while the system steadily improved its technique, the best human players could still win games against it. DeepMind then built a successor, AlphaGo Zero, that learned purely through self-play, developing its own strategies, and the program started to play in ways humans did not understand. It began to win against the master players because it broke from human constructs of the game and created a framework of play that expert human players could not generate.
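The engine of that leap, self-play, can be demonstrated on a much smaller game. The Python sketch below is a toy: tabular Q-learning teaching itself Nim (take 1–3 stones; whoever takes the last stone wins) purely by playing against itself, with no human games. It is in the spirit of AlphaGo Zero’s core idea, not its actual method, which combined deep networks with Monte Carlo tree search.

```python
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)                 # a move takes 1, 2, or 3 stones
Q = defaultdict(float)              # Q[(stones_left, action)] -> learned value

def choose(stones, eps):
    """Pick a move: explore with probability eps, else take the best known."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

def self_play_episode(start=21, eps=0.2, alpha=0.1):
    """One game of the agent against itself; both sides share one Q-table."""
    stones, history = start, []
    while stones > 0:
        action = choose(stones, eps)
        history.append((stones, action))
        stones -= action
    # Whoever moved last took the final stone and wins (+1); walking the
    # history backwards, the reward alternates sign between the two sides.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += alpha * (reward - Q[(state, action)])
        reward = -reward

for _ in range(50_000):
    self_play_episode()

# With no human input, the agent rediscovers the known optimal strategy:
# always leave the opponent a multiple of 4 stones (21 -> take 1 -> 20).
print(choose(21, eps=0.0))  # expected: 1
```

Even in this tiny setting, the learned policy comes from the game’s structure rather than from human examples, which is why such systems can surprise their creators at scale.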
Distributed AI platforms, pervasive networks, and distributed computing will accelerate countless AI innovations for specific applications. Yet as AI becomes embedded in more and more objects that deliver services, without a unifying framework and regulation defining the rules of the road, its progress and its effects on society will be diverse and uncontrolled. Leaders of OpenAI, Google DeepMind, Anthropic, and other AI labs recently signed a statement warning that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” Unfortunately, the statement is so broad that it offers no credible suggestions as to how academia, business, and government should mitigate these future risks, because we have not experienced them yet.
Kubrick did not view ultra-intelligent machines as a threat to or a savior of humanity. He framed the conversation less in terms of fear and more in terms of “realpolitik,” based on circumstances rather than ethics and morals. Humans are distinctive because of the role of moral relativism in their decision-making, which treats ethics and morals not as absolutes but as fluid constructs between individuals and groups. Kubrick did recognize that new forms of intelligence should understand human beings, as it “would be useful for them to know what human feelings are because it will help them understand us.” Will ultra-intelligent machines exhibit consistent or morally relative behaviors?
The “mechaniarchy” Kubrick mentioned, administered by ultra-intelligent machines, would not be inherently evil. According to Kubrick, “I can’t think of any reason why it’s a frightening prospect because some intelligence seems to be something which is good, and so I can’t see how your ultra-intelligent machine is going be any worse than man.” Since man is presently the arbiter of this discussion, only time will tell if Kubrick was right.