The Hypernormalisation of AI

The hypernormalisation of AI is a complex, multifaceted issue. Yes, AI has the potential to revolutionize industries and create new job opportunities. But it is equally important to separate fact from fiction and to recognize the limits of LLM behaviour.
3/31/2024 · 4 min read

The term hypernormalisation was coined by the Russian-born Berkeley professor Alexei Yurchak to describe the dying Soviet Union of the mid-1970s. It refers to a situation in which both the people and the government jointly agree to pretend that everything is normal in the face of a failing system. The British documentarian Adam Curtis reintroduced the term with his film HyperNormalisation, a cinematic disquisition on the dystopia of cyberspace and political manipulation in the 21st century.

What is AI Hypernormalisation in 2024?

AI hypernormalisation in 2024 refers to the growing sense of powerlessness and disillusion in the modern world, as global technology companies try to normalize the behavior of a generation through data mining and Big Data. The results are a disillusioned generation of millennials, the rise of demagogues, the spread of post-truth, and the vacuous narcissism of cyberspace. Even so, it is important to separate fact from fiction when it comes to AI and its impact on the job market.

Why Do Agencies like Gartner Predict AI Doom Cycles?

Historically, the computer spreadsheet and the word processor transformed their industries without hollowing them out: in almost every case of supposed upheaval, more jobs were created than lost. Raw AI content is not catching on, and if AI were going to replace jobs wholesale, it would already be happening. The Industrial Revolution brought profound changes to the way we live and work, and AI may revolutionize industries and create new job opportunities in the same way. But we need to be wary of the hypernormalisation of AI and ensure that it is used ethically and responsibly.

Agencies like Gartner nonetheless predict AI doom, with alarming forecasts for GenAI-generated content and marketing cycles. For 2025, they predict a decline in the quality of social media that will push customers to limit their usage. By 2026, brands will lean on senior creative roles to differentiate themselves, while CMOs adopt specific technologies to protect their brands from GenAI-driven deception. Most striking of all, by 2027 Gartner expects 20% of brands to position and differentiate themselves as "AI-free." These predictions underline the need for tech authenticity and the danger of letting the hype cycle drive the industry.

Why Does Claude-3 Beat the Average Human IQ but Fail at Mundane Tasks?

The hypernormalisation of AI is a real concern, but here too we must separate fact from fiction. The recent release of Claude-3, a Large Language Model said to score above the average human on IQ tests, is impressive, yet its capabilities need to be weighed against its limitations. It can build economic models, run ML experiments, reason about science, write extremely well, tackle PhD-level problems, sketch new quantum algorithms, and produce genuinely good code. At the same time, it fails at far more mundane tasks: solving crosswords, creating word grids, playing Wordle, and navigating mazes. That contrast forces the question of whether AI is truly intelligent or something else entirely.

The Hypernormalisation of AI: A New Era of Intelligence or Just Stochastic Parrots?

As we delve deeper into the world of AI and its capabilities, it becomes clear that intelligence as we know it may not be the right description for LLM behaviour. Despite performing tasks that would demand extraordinary intelligence from a human, such as speaking Circassian or solving PhD-level quantum physics problems, LLMs fail at problems that mice manage in lab experiments, let alone the games that young children play.

A Word of Warning: The Limitations of AI's Intelligence and the Risks of Over-Extrapolation

This raises the question: what is intelligence, really? Is it simply the ability to perform certain tasks, or is there more to it than that? The theory of special intelligence suggests that LLMs hold a distilled knowledge of everything we have produced, an amalgam that is specially, rather than generally, intelligent. Such a system can connect insights from any domain and predict what we might say if we had thought to connect those tokens together. But this intelligence is not general, and it cannot extrapolate beyond the boundaries of what it has already learned.

It's important to remember that LLMs are not like us. They have been trained on more information than any human could hope to see in a lifetime. The qualities we take as signs of intelligence in people, such as quoting Cicero or doing mental arithmetic, do not carry the same correlated signal between knowledge and intelligence in an LLM. These models are not conscious beings, and they do not share our inner phenomenology.

LLMs: To Anthropomorphize or To Avoid?

So, what does this mean for the future of AI? Will we continue to anthropomorphize LLM output and ascribe intelligence to it, or will we recognize that it has a different quality altogether? As we continue to develop and refine LLMs, it is crucial that we keep asking these questions and strive for tech authenticity. We must be cautious of letting the hype cycle drive the industry, and recognize the limitations of AI even as we marvel at its incredible abilities.

In conclusion, the hypernormalisation of AI is a complex, multifaceted issue. While AI has the potential to revolutionize industries and create new job opportunities, we must separate fact from fiction, recognize the limits of LLM behaviour, strive for tech authenticity, and refuse to let the hype cycle drive the industry.