The future of the future: Artificial intelligence, humanity, and design adaptation

2025 is proving to be a banner year for groundbreaking advancements in artificial intelligence, yielding highly practical and valuable outcomes. Geoffrey Hinton, a major contributor to the development of artificial intelligence, once predicted that superintelligent AI, known as artificial superintelligence (ASI), could take twenty years to achieve. He now reflects that the pace at which AI systems are maturing has exceeded his expectations, and he has shortened his ASI timeline to ten years.

AI, built largely on machine learning, is a truly transformational technology because it is changing our fundamental thinking on such a broad scale. AI is accelerating our ability to achieve more in less time and to iterate much faster than previously possible, with greater economic efficiency. It is also raising key questions about the nature of intelligence and creativity, and about how inherently human characteristics, such as thought, self-determination, and judgment, are quickly emerging in self-organizing digital systems.

People in general, and designers in particular, are rightly concerned about the pace and performance of AI systems, and they feel vulnerable to this black-box technology because it is starting to have real-world impacts on individual agency, employment stability, and wage growth.

Many are driven by fear and want to reject AI outright. Others want to integrate human creativity and judgment with AI to supercharge possible outcomes. Whichever scenario prevails, one thing is sure: the cone of uncertainty is very large because we have little historical experience with AI. The rise of AI is also occurring against a global backdrop of governmental and technocratic tumult, uncertainty, fear, and division.

This article examines the challenges to neoclassical economic theory, the changing nature of labor and output, and how AI will fundamentally alter the role and activities of designers. Many of today’s design skills and activities will be automated. Design communities will need to reexamine their traditional emphasis on creativity, aesthetic utility, and older digital production processes, and shift to becoming orchestrators of human-machine collaborations. A small number of designers will move upstream to more strategic activities that are, at present, difficult to automate, and become active shapers of future AI capabilities.

 

Past projections of future possibilities for a better future

John Maynard Keynes, the world-renowned British economist, challenged established neoclassical economic thinking by proposing that total economic spending determines economic activity and, therefore, employment. Amid the Great Depression, he published a short article in 1930 called “Economic Possibilities for Our Grandchildren,” which linked human progress to controlling global population growth, avoiding costly wars, investing in scientific advancements, and balancing rates of production and consumption.

Unfortunately for Keynes, his proposed benefits for 2030 have in reality been uneven:

  • Controlling human population: Between 1930 and 2025, the world population quadrupled from 2 billion to 8 billion people – a 300% increase – due in part to improvements in healthcare, sanitation, and standards of living.
  • Avoiding costly wars: Between 1930 and 2025, about 6–8% of the world’s accumulated wealth ($70–90 trillion in today’s dollars) was spent directly on wars, with a further $65–95 trillion lost in foregone economic output – a combined cost of approximately $180 trillion in constant 2020 dollars.
  • Investing profits in scientific advancements: Between 1930 and 2025, about 45–55% of economic output growth has been attributable to scientific advancements.
  • Balancing the rate of production and consumption: Between 1930 and 2025, world production (GDP) increased about 21-fold, or roughly 3.3% per year in real, inflation-adjusted terms. World consumption increased about 23-fold in absolute terms, or roughly 3.4% per year (GDP and consumption are closely linked, as supply and demand drive one another). Aggregate inflation since 2000 has averaged about 3% per year, enough to double the cost of an item in roughly 24 years.
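The compound-growth claims above can be checked with a few lines of arithmetic. This sketch, purely illustrative, converts the fold-growth multiples from the text into annual rates and derives the doubling time implied by 3% annual inflation:

```python
# Checking the compound-growth arithmetic. The 95-year span (1930-2025),
# the 21x / 23x growth multiples, and the 3% inflation figure come from
# the text; the math converts them into annualized rates.

import math

years = 2025 - 1930                            # 95 years
gdp_rate = 21 ** (1 / years) - 1               # 21-fold growth -> annual rate
consumption_rate = 23 ** (1 / years) - 1       # 23-fold growth -> annual rate
doubling_time = math.log(2) / math.log(1.03)   # years to double at 3% inflation

print(f"{gdp_rate:.1%}")          # about 3.3% per year
print(f"{consumption_rate:.1%}")  # about 3.4% per year
print(f"{doubling_time:.0f}")     # about 23 years (the rule of 72 gives 72/3 = 24)
```

The rule of 72 (divide 72 by the growth rate in percent) is the usual mental shortcut for the last figure.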

Keynes proposed that with technical advancements, the need for traditional labor pools would contract, allowing people to “. . . do more things for ourselves than is usual with the rich today, only too glad to have minor duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter – to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while.”

Instead of the 40-hour workweek (a product of Victorian-era social justice efforts), he predicted a 15-hour workweek, and that the standard of living by 2030 would be four to eight times greater than in 1930. In 2025, the definition of a workweek is under debate, primarily due to the pandemic and the rise of hybrid work. The traditional 40-hour workweek is being questioned, and some companies are experimenting with compressed four-day workweeks or replacing fixed hours with weekly goals.

Rapid advancements in technology have created incredible benefits for markets and people, giving rise to whole new industries and revitalizing legacy ones. Keynes’s key prediction – that the aggregate standard of living by 2030 would be four to eight times greater – has largely come true.

Global aggregate GDP, life expectancy, literacy rates, access to electricity, and, more importantly, personal technology have supercharged the international standard of living by a factor of six. Unfortunately, these benefits have been unevenly distributed, mainly to the global North. However, even in the global South, there have been undeniable increases in standards of living.

It is against this backdrop of a century of progress that we have entered an age of continual VUCA (volatility, uncertainty, complexity, and ambiguity). The 21st century, which opened with the costly Cold War recently ended and the Y2K bug defused, has been marked by a series of even more expensive wars, economic meltdowns, a crisis in democratic governance, and a global pandemic. All of these factors directly challenge the foundational tenets of the post-World War II international order, built on multilateral cooperation, economic stability, and prosperity. All that we took for granted and never really thought about is now up for grabs. The result is a lack of trust in governments, societies divided by unfettered social media turning people against each other, and a yearning for muscular state sovereignty to insulate and protect native populations from global integration.

Historically, technological advancements have disproportionately benefited wealthier and more educated citizens, as increased automation has led to higher productivity, reduced human error rates, and fewer production defects. In the context of the design landscape, AI advancement is happening so quickly that professional organizations, communities, educational programs, and practitioners have not been able to develop a coherent perspective on how AI and design can collaborate to support organizations in delivering value to their markets.

Design is not alone. Many professions are grappling with how to integrate AI skills and outcomes without commoditizing core professional activities and processes, which are often targeted by new technologies for automation to maximize value and return on investment. Companies rely on fewer workers to produce more with greater economic output and profit margins. Displaced workers have fewer employment options, and their future financial prospects are uncertain due to decreased wages for unskilled physical jobs in the gig economy, which are becoming increasingly automated. 

The management classes (white-collar jobs), which require college or technical education for work in the knowledge economy, were long insulated from technological advancements because they relied on human judgment that could not be automated. Major decisions still frequently require human judgment, and the white-collar sector continues to benefit from economic prosperity and higher personal earnings. With the rapid advancement of artificial intelligence, however, white-collar jobs are now at risk of automation, which would shrink the white-collar workforce. A significant portion of design is a professional white-collar activity, with education and professional roles centered on intellectual capital, creativity, and implementation.

These factors have affected, and will continue to affect, the practice of design on two levels. First, the act of designing is based on trust, as it is a collaborative and disruptive process that challenges the status quo to create improved value and increase people’s agency. If trust is not present, people will be reluctant to cooperate in a convivial atmosphere. Second, AI systems are automating much of what design practitioners do, both creatively and in implementation, by giving everyday people the ability to iterate on AI-generated visualizations that are implemented as they are generated.

 

AI leaders’ desire for “realistic optimism”

Silicon Valley, working in concert with academia, has iterated its way to creating practical AI systems. Google created Google Brain in 2011, acquired DeepMind in 2014 to work on artificial general intelligence (AGI), and open-sourced the TensorFlow AI framework. OpenAI was founded in 2015 as a nonprofit research lab to “ensure that artificial general intelligence (AGI) benefits all of humanity” and to prevent any single company or government from controlling AGI.

Mary Meeker, the venture capitalist, released her yearly internet trends report; this year it focuses on AI, and the accelerated adoption it documents shows that this is the fastest-moving technological transformation ever. AI reached global user distribution in three years, a milestone that took the Internet 23 years to achieve. It is also the most rapid transformation in knowledge distribution since the advent of the printing press. The most immediate impact is on everyday operations and productivity, but it will expand into more sophisticated applications, leading to a new generation of AI-enabled Internet of Things. Lastly, AI computation demands are doubling every 6–10 months, fueling the construction of data centers and new energy demands.
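The "doubling every 6–10 months" figure is easier to grasp as an annual multiple: a doubling period of d months multiplies demand by 2^(12/d) per year. A quick illustrative conversion:

```python
# Converting a doubling period (in months) into an annual growth factor:
# demand that doubles every d months grows by 2 ** (12 / d) each year.

def annual_factor(doubling_months):
    return 2 ** (12 / doubling_months)

print(round(annual_factor(6), 1))    # doubling every 6 months -> 4.0x per year
print(round(annual_factor(10), 1))   # doubling every 10 months -> 2.3x per year
```

So the claim implies computation demands are multiplying by roughly 2.3x to 4x every year.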

As OpenAI made progress, its core values were put to the test; internal battles for control of the company, and the shift from research to rapid commercialization with Microsoft, have helped create a highly competitive and iterative AI landscape.

Now, most major technology companies are competing to develop large language model platforms that various industries can easily integrate to create immediate value. Google’s Gemini, Microsoft’s Copilot, Meta’s Llama, IBM’s watsonx, Amazon’s Bedrock and SageMaker, and niche players such as Perplexity, DeepSeek, and Anthropic’s Claude are quickly deploying systems.

Dario Amodei, CEO of Anthropic, was recently interviewed by Axios in a provocative article titled “Behind the Curtain: A white-collar bloodbath.” The big takeaway that made headlines was the prediction that “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years.” Even a society grown accustomed to frequent change cannot easily adapt, within five years, from using AI as an augmentative collaborator to a world where AI agents replace human workers at scale.

John Maynard Keynes reflected that from the birth of Christ until 1700 progress was slow, which “. . . was due to two reasons – to the remarkable absence of important technical improvements and to the failure of capital to accumulate,” before accelerating from the Renaissance onward. In 1930, he hypothesized that “In quite a few years – in our own lifetimes, I mean – we may be able to perform all the operations of agriculture, mining, and manufacture with a quarter of the human effort to which we have been accustomed.”

Dario Amodei of Anthropic stated, “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming.” Even Keynes anticipated this when he noted “unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour. But this is only a temporary phase of maladjustment.” No one currently knows how long the coming “phase of maladjustment” of AI expansion will last.

Aneesh Raman of LinkedIn wrote an op-ed in the New York Times called “I’m a LinkedIn Executive. I See the Bottom Rung of the Career Ladder Breaking.” He reflects that technologies have historically displaced workers, and that those same workers have retrained for new roles. But AI is being deployed so quickly, and is maturing to outperform entry-level job tasks so fast, that it is breaking “the bottom rungs of the career ladder — junior software developers … junior paralegals and first-year law-firm associates.”

OpenAI CEO Sam Altman has proposed a “realistic optimism” that acknowledges the risks of developing and deploying AI while maintaining an optimistic outlook on its benefits. Idealism and realism are often difficult to balance, and the rapid advancement and deployment of AI are outpacing humans’ ability to connect the two, as AI is inherently disruptive, unpredictable, and dual-use. AI systems can accelerate progress on complex problems and refine solutions by iterating options, supercharging productivity at a rate that humans alone could not achieve. These same systems can learn and adapt at rates that humans cannot, and their capabilities can accumulate, potentially competing with and replacing human labor and production abilities. What many politicians and business leaders refuse to discuss or contemplate is the mass displacement of workers, or a possible drastic reduction in wages, due to accelerated AI/AGI deployments.

Amodei ends the Axios interview by stating, “You can’t just step in front of the train and stop it… The only move that’s going to work is steering the train — steer it 10 degrees in a different direction from where it was going. That can be done. That’s possible, but we have to do it now.” While the United States has no singular AI policy and is resistant to creating one, the European Union has just passed its first AI act “…to create a trustworthy environment for AI, promoting human dignity and rights while fostering innovation in the field.” It establishes governance of general-purpose AI models, data quality and transparency requirements, initial prohibitions, and defined risk levels (high and unacceptable) that can inform specific regulations.

 

Designing a view on designers using and shaping AI

Design communities have historically been among the first to adopt new production technologies. Since the modern design era began in the 1850s, design and production have been closely intertwined in meeting commercial and market goals – from exploiting the possibilities of mechanical looms, production-grade lithography, and photographic technologies, to machining and molding, time-based advancements in film and video, and experimentation with new production processes. Designers are naturally curious makers and, as such, have always sought ways to combine creativity, innovation, ingenuity, and production into a virtuous cycle of creative destruction.

When the personal computer became a mainstream technology in the 1990s, desktop publishing was disruptive to design because software democratized publishing, allowing citizens to create and self-publish printed materials. At that time, design communities were very concerned about their role in an uncertain environment. Design professions adapted by either becoming more efficient and productive using the same technologies or transitioning into other practice areas that could not be commoditized. 

Since the 1990s, designers have had varying degrees of impact in shaping technology and designing competitive outcomes for markets. In each of these episodes, design communities felt threatened and had to adapt to known and unknown conditions to remain relevant to clients and markets. Designers’ strengths in creativity, concept development, and production were still valued, but the operating environment shifted to new processes and outcomes. The dramatic growth of “UI/UX” professionals shifted the center of gravity in design practice, and this growth had a halo effect on other design professions, prompting them to combine physical and digital elements into an integrated whole.

When internet technologies became more utilitarian in the 2000s, there was demand for easier-to-consume websites, and the rise of user experience and user interface designers led to the creation of bespoke sites. As these technologies matured, code and website elements became standardized and essentially automated, allowing non-designers to use established design patterns to build websites. With the rise of platforms – focused sets of capabilities and API integrations built on microservices and linked to sophisticated data analytics – designers had to modify their skills and knowledge to become active contributors to product development.

Most designers’ gateway to AI experimentation and creation is the generation of synthetic images via the numerous free AI visualization platforms, such as Midjourney or Canva. Designers are also using AI to iterate on object typologies, rapidly generating multiple options and refining them until a desired typology is achieved. These are natural entry points, as they accelerate the creative process and provide straightforward, utilitarian, and cost-effective outcomes. The risk for design communities is remaining mere users, rather than becoming shapers, of the ever-increasing capabilities of AI technologies.

Design professions should view AI as a set of capabilities that supercharge creativity and rapidly iterate on variations that human designers can evaluate and refine. The community should not view speed and efficiency as a net loss, but focus on the benefit of relieved production pressure, leaving more time for thoughtful conceptual development. Instead of viewing designers and AI systems as separate transactional entities, they should be viewed as interdependent collaborators, each iterating on the other’s contributions. This relationship can be expressed as a four-square space of human and machine contributions.

If designers and design communities want some control over their futures in an environment where intelligent digital systems are relied on to concept, generate, and produce value for markets, then designers will need to move past being mere users of individual AI platforms and become shapers of the rationale and capabilities of these systems.

Traditional user experience and user interface designers, as well as many of their organizational collaborators, often lack the understanding and skills necessary to be net contributors to cross-functional teams that build and deliver AI products and services. Data, and the structure of data, are the new skills of design. John Maeda has coined the term “computational design” for the integration of computer science principles, algorithmic thinking, and code into the creative process of design. Computation has become a creative partner, capable of generating, adapting, and optimizing designs through code, data, and logic.

This means designers, or a new type of designer, will need to be well-versed in the following AI areas:

Understanding LLMs
Large language models (LLMs) are trained on a structured body of collected content (called a corpus), which they process through input, embedding, transformer, and output layers.
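As a rough intuition for those four layers, the toy sketch below passes a prompt through input, embedding, transformer, and output stages. Every name, size, and operation here is a hypothetical simplification for illustration; it is not how a real LLM computes:

```python
# Illustrative sketch (not a real LLM): the four layers named above,
# reduced to toy operations over a tiny corpus.

corpus = "the designer shapes the system".split()
vocab = sorted(set(corpus))                      # corpus-derived vocabulary

def input_layer(text):
    """Tokenize: map each word to an integer id."""
    return [vocab.index(w) for w in text.split()]

def embedding_layer(token_ids, dim=4):
    """Map each token id to a small dense vector (here: a fixed pattern)."""
    return [[(tid + j) % 3 - 1 for j in range(dim)] for tid in token_ids]

def transformer_layer(vectors):
    """Mix context: here, just average each vector with its left neighbor."""
    mixed = []
    for i, v in enumerate(vectors):
        left = vectors[max(i - 1, 0)]
        mixed.append([(a + b) / 2 for a, b in zip(v, left)])
    return mixed

def output_layer(vectors):
    """Score each vocabulary word from the final context vector."""
    last = vectors[-1]
    return {w: sum(last) + i for i, w in enumerate(vocab)}

tokens = input_layer("the designer shapes")
scores = output_layer(transformer_layer(embedding_layer(tokens)))
print(max(scores, key=scores.get))   # toy "next word" prediction
```

A production model does the same pipeline with learned weights, attention, and billions of parameters, but the layer ordering is the same.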

Understanding AI tools
AI agents run on a set of consolidated subsystems that an agent needs to be an effective collaborator: a knowledge base (corpus), a defined operating environment, a reasoning module, a planning module, an evaluation module, and sensors and actuators.
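Those subsystems can be pictured as a simple sense-plan-act-evaluate loop. The sketch below is a hypothetical toy, not any specific agent framework; the attribute and method names simply mirror the list above:

```python
# A minimal agent-loop sketch of the subsystems listed above.
# The module names and the toy "raise quality to target" task are
# hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Agent:
    knowledge_base: dict                 # corpus of facts the agent can use
    environment: dict                    # operating environment state
    goal: str
    log: list = field(default_factory=list)

    def sense(self):                     # sensor: read the environment
        return self.environment.get(self.goal, 0)

    def plan(self, observed):            # planning module: pick the next step
        return "increase" if observed < self.knowledge_base["target"] else "stop"

    def evaluate(self, observed):        # evaluation module: goal reached?
        return observed >= self.knowledge_base["target"]

    def act(self, step):                 # actuator: change the environment
        if step == "increase":
            self.environment[self.goal] += 1
        self.log.append(step)

    def run(self, max_steps=10):
        for _ in range(max_steps):
            observed = self.sense()
            if self.evaluate(observed):
                break
            self.act(self.plan(observed))
        return self.sense()

agent = Agent(knowledge_base={"target": 3}, environment={"quality": 0}, goal="quality")
print(agent.run())   # → 3
```

Real agents replace each toy method with an LLM call, a retrieval step, or an API, but the loop structure is the same.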

Understanding AI microservices
Platforms increasingly rely on API microservices to enhance the value of their products and services, productizing AI capabilities such as text classification, recommendation engines, image analysis, and many other user benefits.
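As an illustration of consuming such a microservice, the sketch below calls a hypothetical text-classification endpoint. The URL, payload shape, and labels are invented for this example, and the transport function is injected so the sketch runs without a live service:

```python
# Sketch of calling a hypothetical text-classification microservice.
# The endpoint, payload shape, and labels are assumptions for illustration.

import json

def classify_text(text, transport, endpoint="https://api.example.com/v1/classify"):
    """Build the request, send it via `transport`, and return the top label."""
    payload = json.dumps({"text": text})
    response = transport(endpoint, payload)      # an HTTP POST in production
    result = json.loads(response)
    # Pick the label with the highest confidence score.
    return max(result["labels"], key=lambda l: l["score"])["name"]

# Stub transport standing in for the real service:
def fake_transport(endpoint, payload):
    assert "text" in json.loads(payload)
    return json.dumps({"labels": [{"name": "design", "score": 0.92},
                                  {"name": "finance", "score": 0.08}]})

print(classify_text("AI is reshaping design practice", fake_transport))  # → design
```

Injecting the transport keeps the service contract (JSON in, scored labels out) visible while the network details stay swappable.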

Understanding AI Infrastructure
To operationalize AI, it is essential to understand compute, network, data, model frameworks, and organizational AI governance, as these elements are crucial in designing product or service systems that rely in part or whole on AI.
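One way to make those infrastructure concerns concrete is a checklist-style configuration review. The field names below are illustrative assumptions, not a real framework:

```python
# A hedged sketch of the infrastructure areas named above, expressed as a
# configuration completeness check. All field names are hypothetical.

REQUIRED = {"compute", "network", "data", "model_framework", "governance"}

def validate_ai_config(config):
    """Return the infrastructure areas a deployment plan has not addressed."""
    return sorted(REQUIRED - set(config))

plan = {
    "compute": {"gpus": 8, "autoscaling": True},
    "network": {"region": "eu-west", "latency_budget_ms": 200},
    "data": {"corpus": "product-docs", "pii_scrubbed": True},
    "model_framework": {"runtime": "pytorch", "quantized": False},
}

print(validate_ai_config(plan))   # → ['governance']  (still unaddressed)
```

Governance being the commonly missing entry mirrors the point above: the organizational element is as essential as compute, network, data, and model frameworks.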

 

Shifting to new design activities and recontextualizing historical ones

As this article has outlined, AI is outpacing any technology that has preceded it and is fundamentally changing skills, processes, products, and services. Because most organizations and their employees are still trying to understand and integrate AI capabilities, it is a trial-and-error process to reduce the cone of uncertainty into a plateau of productivity and continual value. There will be risks and problems associated with using AI, and those should not be understated or ignored. However, there is no going back to a world without AI. 

Some design communities believe that AI will erode human agency and creativity, systematizing them into predictable, stereotypical outcomes that non-designers will readily accept. The same concerns were voiced when the personal computer and the internet were introduced. Designers did not go away; they adapted to those technologies with new skills and roles (remember information architects?). This means designers will need to emphasize critical thinking and knowledge of LLMs, AI tools, microservices, and infrastructure as much as their creativity in order to navigate a very disruptive storm, as organizations and roles undergo radical change under the effects of AI.

The idiosyncratic nature of designers, whose tacit knowledge enables infinite interpretations and variations in approach and outcome, remains necessary. Design and design communities should not yet view AI as a threat, but approach it with realistic optimism, recognizing that AI could be an effective collaborator and contributor to better design outcomes – ones that human designers alone could not consistently attain.


