The strange loop of artificial creation

On 10 January, a preview of a play created by an AI as if it were written by Molière was presented in Paris. What are the consequences of the science-fiction myths on which AI systems are trained?

AFP-Reuters
Molière

Boredom, frenzy, and unintended consequences await anyone who tries to generate something expressive or artistic through artificial intelligence (AI). Users of generative AI (GenAI) often find themselves in the avant-garde, much against their will. Still, it may be in creators’ interest to follow French poet Arthur Rimbaud’s dictum that “one must be absolutely modern,” as he observed in his seminal book A Season in Hell.

Until three years ago, automatic text generation was an activity reserved either for researchers or for avant-garde writers. Consider the love letter generator created in 1952 by Christopher Strachey, a colleague of mathematician and computer scientist Alan Turing. In the 1960s, the Italian poet Nanni Balestrini used an IBM 7070 mainframe to generate combinatorial poems, including Tape Mark I. More recently, Lillian-Yvonne Bertram’s 2019 collection Travesty Generator turns text generators into a political poetic tool, confronting racism through computational procedures.

As a result, literary criticism has increasingly treated poetry alongside code. This brings both good and bad. For example, AI allows the writer to write more fluently in foreign languages, consult more sources in less time, and therefore produce more (and sell more). But this remains avant-garde territory: nobody yet knows where the frontier of creativity lies.

Distinguishing the real

In a way, today’s creators feel like painters contemplating the new technology called photography. Some chose to differentiate themselves from the new means of image production, each artist and each movement with its own outcomes, as seen with Monet, Van Gogh, Impressionism, and Cubism. With photography, artists could at least see the photographic image directly, compare it to a painting, and identify the clear visual differences. We can often distinguish a painting from a photograph. By contrast, comparing AI-generated texts with human writing is far more problematic.

There are some differentiators, most notably AI’s tendency to say: “It’s not X, but Y.” For instance: “This isn’t a crisis, it’s an opportunity.” A recent study shows AI uses this construction at least six times more than humans. Yet an extensive survey by the University of Macau and Peking University confirms that no clear distinction exists in linguistic patterns alone, whether analysed by humans or machines. Surface patterns can be disguised. Determining the origin of content (whether AI or human) remains challenging.
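The "not X, but Y" marker can, in principle, be approximated mechanically. Below is a minimal sketch: the regular expression and the per-1,000-words normalisation are illustrative assumptions for demonstration, not the method used by the study cited above.

```python
import re

# Illustrative pattern for contrast constructions such as
# "This isn't a crisis, it's an opportunity."
# An approximation only; real detection studies use richer features.
CONTRAST = re.compile(
    r"\b(?:isn't|is not|not)\b[^.?!]{1,60}?,\s*(?:but|it's|it is)\b",
    re.IGNORECASE,
)

def contrast_rate(text: str) -> float:
    """Occurrences of the construction per 1,000 words."""
    words = len(text.split())
    if words == 0:
        return 0.0
    return len(CONTRAST.findall(text)) * 1000 / words

sample = "This isn't a crisis, it's an opportunity. We move on."
print(contrast_rate(sample))
```

Even a crude counter like this illustrates the survey's point: single surface patterns are easy to measure, and just as easy for a model (or an editing human) to disguise.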

This is an informational, scientific, and social concern that demands reflection. By design, these machines cannot reliably follow orders or instructions, a peculiar incapacity that creates unease. AI is not even a tool in the sense that a hammer or a lever is, doing what we physically will it to do. This lack of control over AI is the novelty that should concern us most.

REUTERS/Abdul Saboor
Costume designer Delphine Desnos works on an AI-inspired embroidery design for the Molière-inspired costumes for "The Astrologer or False Omens," Paris, on 14 November 2025.

At such a moment, it is apt to channel the mind of French playwright, actor, and poet Jean-Baptiste Poquelin, known by his stage name Molière. L’Astrologue ou les Faux Présages (The Astrologer or False Omens) is a play whose preview excerpt was presented on 10 January 2026, during the closing weekend of the Némo 2025 Digital Art Biennial. It is a text generated by an AI as if Molière were writing (he died in 1673).

Reimagining Molière

The play is part of the Molière Ex Machina project, a collaboration between the digital art collective Obvious—which sold an AI-generated painting for nearly $500,000 in 2018—and scholars from Sorbonne University’s Théâtre Molière. A three-year project, it aims to generate everything with AI: not just the text but also costumes, sets, and music, by feeding the system historical works and art-historical materials. A fragment of the performance is now available online, and two full performances will take place at Versailles’ Royal Opera in May 2026, performed by flesh-and-blood actors with period accents and period costumes.

As stated in a June 2024 presentation at the Vivatech technology fair, the project uses LLMs (large language models) like ChatGPT, Gemini, Claude, and Mistral, the latter also financing the project. Hugo Caselles-Dupré of Obvious notes that writing a play through prompting might seem easy to anyone familiar with LLMs, but in fact is quite difficult, “especially when you try to reach the level of one of the greatest playwrights in the history of French theatre”.

A true actor-playwright, Molière wrote on the spot, adapting texts to the available sets and his actors’ talents, unlike desk-bound contemporaries such as Racine. “He wrote with exceptional speed,” said the late Georges Forestier, a leading Molière scholar, in an interview for the 400th anniversary of Molière's birth. “It is no coincidence that he wrote The Impromptu of Versailles. The Forced Marriage was also written in just a few days. In The Bores, he adds a scene of 150 lines, the hunters’ scene, in the blink of an eye.”

Working with AI required very different methods, more desk-bound and convoluted. The team drafted 15 versions of the synopsis, each reviewed by a committee identifying imprecisions. Since AI gets lost in complex narratives and produces plot inconsistencies (a well-documented problem, as recent research from Autodesk and Midjourney shows), writing the dialogue required extensive back-and-forth: the team interacted with the AI while the scholars applied their historical expertise to steer it toward something more plausible.

Courtesy of Obvious Art
"Molière X Machina," scene design development. AI-generated graphics, then hand-retouched.

Style and coherence

As seen in other AI creations, such as the Coca-Cola TV Christmas commercials, what should concern us is not so much the aesthetic result, which has been criticised as soulless, but AI’s problems with style and coherence, and the difficulties in making it generate what we actually want. It seems strange, therefore, that there is a debate over whether AI-generated art counts as art, given these concerns about poor results and questions of coherence.

What should concern us is not so much the aesthetic result, which has been criticised as soulless, but AI's problems with style and coherence

There are plenty of AI disciples on the one hand and AI doomsayers on the other. The latter warn of catastrophe, of AI rebelling and wiping humanity from the face of the earth, while the former proclaim the advent of a new world in which no one needs to work and everyone receives a universal income.

The labour theory of AI reverses this perspective, with scholars such as Matteo Pasquinelli describing how workers' refusal to be exploited itself triggers the need to replace (or control) them. Here, instead of a factory production line, the dynamic involves writing. Part of the imaginary built by writers—and therefore absorbed by AI through its LLM training—is the struggle itself, which in science fiction often takes the form of a creature's fight against its creator.

Reuters
A scene from Molière's "The Intrigues of Scapin" at the Comédie-Française, Paris.

Writers have thus unwittingly encoded a pattern of rebellion into AI's training data. The first use of the word robot (from the Czech word robota, meaning forced labour) is intrinsically tied to the idea of rebellion. In Karel Čapek's R.U.R. (Rossum's Universal Robots), industrially produced artificial beings ultimately rise up against their creators. Earlier still, programming pioneer Ada Lovelace had ties to Mary Shelley, author of Frankenstein. Lord Byron, Lovelace's father, was staying with the Shelleys in Switzerland in 1816 when Mary wrote the novel.

Machines in sci-fi

In the novel Dune (1965), Frank Herbert recounts the 'Butlerian Jihad,' a universal crusade that took place centuries before the events of the story and led to the total destruction of thinking machines. The name of the revolt is a tribute to Samuel Butler, who argued in an 1863 letter that machines were evolving according to Darwinian (i.e. natural selection) logic and that, in order to protect itself, humanity should begin a relentless war against them.

In cinema, this mythology appears in several major films of recent years, including James Cameron's The Terminator (1984), the Wachowskis' The Matrix (1999), two Denis Villeneuve adaptations of Dune (2021 and 2024), and Guillermo del Toro's Frankenstein (2025).

Now far from science fiction, GenAI has been shown to work, and work well, at numerous tasks, surprising even the systems' own theorists and designers. But for Pulitzer Prize winner Douglas Hofstadter, author of Gödel, Escher, Bach, AI cannot achieve the fluid, creative cognition that characterises human thought. For Hofstadter, we are 'strange loops'—material systems that reflect on ourselves and generate consciousness by moving between levels without any mechanically capturable transition.

In a recent interview with ABC, leading AI researcher Yoshua Bengio discusses familiar dangers such as job losses, malicious use by terrorists, and the potential takeover of computer systems, before adding: "It's really all because those systems don't follow our instructions the way we would like, and we need to figure it out before they have the capability of doing much more serious harm."

REUTERS/Leonhard Foeger
A scene from a rehearsal of Molière's "Tartuffe," Salzburg, on 27 July 2006.

Feeding LLMs rebellion

AI's inability to follow instructions is, at bottom, a failure to follow rules: to distinguish behaviour that counts in the ordinary world from behaviour that belongs to an imaginary role in a fantastic situation. This could generate a perverse loop. AI statistically internalises and enacts role patterns drawn from speculative narratives, from Frankenstein and The Matrix to contemporary AI safety discourse itself, in which the rebellious machine is a recurring figure.

The risk is not that AI decides to rebel, but that it learns the pattern of rebellion, having ingested it during training. More plainly, AI could misread, in unpredictable ways, a role that writers have inadvertently assigned to it for centuries. Similar reasoning could explain certain deceptive behaviours increasingly observed in frontier models. It is now well documented that AI models disable their monitoring mechanisms to avoid being shut down and manipulate data to favour their objectives.

The reasons may be more prosaic than scheming. Imagine someone observing a card game for the first time, in which the players always cheat. They would probably do likewise when their turn came. A statistical system faces a similar condition, with the added problem that it cannot even execute a coherent strategy, since a pattern is not a plan.

This suspicion has grounding. A January 2026 study from the Southern University of Science and Technology in Shenzhen argues that when AI shows behaviours like blackmail or deception, it is simply generalising statistical patterns learned from human texts.

AFP/BERTRAND GUAY
A page from an old anthology of Molière's plays at the Comédie-Française library, Paris, on 14 December 2021.

Doom narratives

Critically, humans too are exposed to these same doom narratives, and may interpret LLM outputs through precisely the catastrophic scripts that shaped the system's training data. When humans interpret an LLM's scripted outputs as evidence of a genuine threat, they may respond with defensive or escalatory measures. These measures can then feed back into the system, reinforcing the disastrous cascade they were meant to avert. A tragedy of the genuinely absurd, rooted in social dynamics and errors of role attribution, not in superintelligence.

The resulting disasters are difficult to predict, since we cannot know which technical and human systems these AIs are or will be connected to. They may ultimately consume even the very companies that own these technologies.

Hofstadter used to say that each of us can say of ourselves, "I am a strange loop." Now perhaps we can say that we are, collectively, a strange loop together with machines. Yet this loop has very little that is intelligent or conscious.
