Elon Musk, Grokipedia, and the battle for truth

Elon Musk’s new AI-driven encyclopaedia promises freedom from human bias. But as Grokipedia takes shape, it exposes a deeper struggle over who controls knowledge in the digital age.

Al Majalla

Last month, the billionaire businessman Elon Musk, famed for his SpaceX rockets and Tesla cars, launched a new project produced by artificial intelligence: Grokipedia. Developed by his company xAI, the initiative has been conceived as an AI-driven rival to Wikipedia, which Musk and other conservative activists have long accused of liberal bias.

The rollout of Grokipedia is a landmark moment for the internet, amounting to much more than a clash of two databases in an age defined by a surfeit of information and a shortage of trust. Its launch reveals two competing visions for how human knowledge should be organised in the digital age. Between them lies an unresolved question: who can get closer to the truth—humans or AI?

Wikipedia, an online open-source encyclopaedia written by human contributors, is the largest repository of thought in history. Musk’s challenge to it comes at a time when the idea of truth itself is subject to dispute.

Gatekeeper of knowledge

Two years ago, Musk launched an attack on the Wikimedia Foundation, calling on his followers to stop donating to the non-profit host of Wikipedia until it “regained neutrality.” In one of his sarcastic X (formerly Twitter) posts, he said he would donate $1bn to the foundation if it changed its name to ‘Dickipedia’.

He was shaping a much deeper narrative: that AI might be the only way to restore the concept of truth in an age of bitter dispute. In doing so, he sought to shift the centre of gravity away from Wikipedia’s community of volunteer writers and editors and toward a silent algorithm.

Musk launched the Grok chatbot in late 2023 as an AI-based assistant. From the outset, he began hinting at a more ambitious project that would ‘redefine knowledge’. In March 2024, leaked technical reports revealed that xAI was building a massive database in collaboration with search engines and academic institutions to train a system capable of writing encyclopaedic articles in a neutral style. In July that year, Musk confirmed in a private meeting with the company’s engineers that “Grok will not only answer, but write.”

Grokipedia entered manual testing in August this year and was announced to the public in late October, presented as “the free encyclopaedia without bias.”

Its entries are generated through extensive analysis of open-source data, including scientific reports, government archives, and publicly available academic materials. After the text is produced, it is reviewed by an internal system for consistency and source verification before being published.

Unlike Wikipedia, users cannot edit articles. Their only option is to report errors or provide feedback, which is later integrated into the review process. This closed structure has led critics to describe it as a ‘human-less encyclopaedia,’ while supporters see it as removing ‘human noise’ from the process of knowledge creation.
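
Grokipedia has not documented this workflow publicly, so the following is only a hypothetical sketch of the closed loop described above (generate from open sources, verify internally, publish, then fold reader reports back into review); every function and step name is an assumption invented for illustration, not Grokipedia's real pipeline.

    # Hypothetical publish loop for a closed, non-editable encyclopaedia entry.
    # Step names are invented; nothing here reflects Grokipedia's actual system.

    def generate_entry(topic: str, open_sources: list[str]) -> str:
        """Draft an entry from open-source material (placeholder logic)."""
        return f"{topic}: summary drawn from {len(open_sources)} open sources."

    def passes_internal_review(entry: str, open_sources: list[str]) -> bool:
        """Consistency and source check before publication (placeholder logic)."""
        return bool(entry) and len(open_sources) > 0

    def handle_reader_report(entry: str, report: str) -> str:
        """Readers cannot edit; their reports are queued for the next revision."""
        return entry + f" [flagged for review: {report}]"

    sources = ["public archive", "government report"]
    entry = generate_entry("Example topic", sources)
    if passes_internal_review(entry, sources):
        entry = handle_reader_report(entry, "possible outdated figure")
        print(entry)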

Only a few hours after the official launch, the new site’s servers went down due to heavy traffic from curious users. Social media was flooded with images of the error page, along with sarcastic comments about ‘the encyclopaedia of neutrality that began with instability.’

AFP
This image shows screens displaying the logo of Grok, an AI-generative conversational bot developed by US AI firm xAI, in Toulouse, southern France, on 15 January 2025.

Why sourcing matters

Grokipedia contained nearly 900,000 articles at its launch, mostly automated or partially copied from open databases. Users who compared Grokipedia and Wikipedia found linguistic and structural similarities across many entries, leading some to accuse the new platform of copying Wikipedia.

Musk’s team said its algorithm does not copy text but reframes it and uses broader data sources, with any overlap the result of the standardised nature of encyclopaedic writing. But some early articles were almost identical to their Wikipedia counterparts.

Grokipedia’s supporters argued that these similarities were natural in its early stages, since no new encyclopaedia could begin entirely from scratch. Its critics said Grokipedia’s AI depended on the work of Wikipedia’s human volunteers.

However, the central question remains: can AI write with neutrality when it relies on the very material that shaped it, including the biases embedded in that work?

The central question remains: can AI write with neutrality?

Wikipedia itself began in 2001. Its founders, Jimmy Wales and Larry Sanger, were also driven by a revolutionary spirit: the idea was to enable people everywhere to build open-source knowledge together, with freedom and popular creative control. It evolved from an earlier project called Nupedia, which failed because its expert review process was too slow.

Wikipedia has grown into the largest knowledge bank in history, with over 60 million articles written in 300 languages. It introduced the principle of editorial consensus, whereby volunteers discuss disagreements openly on talk pages, reaching a balanced version that reflects different perspectives. 

Yet this consensus model has left it open to accusations of bias. Each language community reflects its own political and media culture, and over time editorial schools have emerged within Wikipedia that dominate the writing of certain topics.

It was here that Musk saw danger: neutrality turning from a guiding principle into an empty slogan. Along with several conservative voices, he argued that Wikipedia backed a mainstream narrative, particularly on issues of gender identity and politics.

Reuters
Elon Musk, CEO of SpaceX and Tesla, and owner of Twitter (X), attends the Viva Technology conference, dedicated to innovation and startups, at the Porte de Versailles exhibition centre in Paris, France.

Musk's power grab

Some believe that Grokipedia is an attempt by Musk to concentrate power. Yet criticism of Wikipedia itself, such as accusations of bias, has also come from those closely associated with it, including one of its founders.

In October, Sanger published what he called the 'Nine Theses for Fixing Wikipedia', a tacit admission that the project had drifted from the principle of neutrality.

He called for an end to Wikipedia's anonymity, saying its influential editors should be identified, and criticised the blacklisting of sources deemed 'unreliable', which he said was intended to silence conservative voices.

Musk quickly endorsed Sanger's remarks, reposting them on X with the comment, "Neutrality can only be restored through transparency." Just days later, xAI announced that its new encyclopaedia was ready to launch.

In the first few days after its release, Grokipedia became a vast experimental space where AI met politics, ideology, and media. Within hours, the number of visitors exceeded 25 million.

Social media platforms were flooded with side-by-side images of articles sharing the same title, such as 'Gaza War 2023'. On Wikipedia, the subtitle read 'The Conflict between Israel and Hamas', while Grokipedia's version was titled 'Israeli Aggression on Gaza'. The difference appeared to be merely linguistic, yet it was enough to ignite a heated discussion about who holds the power to name and frame events.

Supporters said neutrality does not mean the absence of perspective but rather the inclusion of all sides without selective human intervention. Critics disagreed, saying machine learning cannot grasp the moral and cultural depth embedded in human language. For them, bias is not measurable in figures. It depends on an outlook shaped by history and identity.

Amid the debate and the growing rivalry between the two platforms, Grokipedia introduced a new feature called 'Chronological Perspective'. It allows readers to trace how any piece of information has evolved over time, as well as to observe how wording or references have changed through different periods.

Many hailed it as a breakthrough in cognitive transparency—a way for users to witness firsthand how facts themselves shift across the years.
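
Neither xAI nor independent reviewers have explained how 'Chronological Perspective' works internally, so the following is only a rough illustration of the idea: a timeline of timestamped revisions of a claim, each carrying its wording and cited sources. All names and data below are invented for the example.

    # Hypothetical revision timeline; not Grokipedia's actual data model.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Revision:
        """One dated snapshot of how an entry states a claim."""
        recorded: date
        wording: str
        sources: list[str]

    def timeline(revisions: list[Revision]) -> list[str]:
        """Return a chronological trace of how wording and sources changed."""
        ordered = sorted(revisions, key=lambda r: r.recorded)
        return [
            f'{r.recorded.isoformat()}: "{r.wording}" (sources: {", ".join(r.sources)})'
            for r in ordered
        ]

    # Two invented snapshots of the same claim, months apart.
    history = [
        Revision(date(2024, 3, 1), "The report was disputed.", ["archive A"]),
        Revision(date(2025, 1, 15), "The report was later verified.", ["archive A", "registry B"]),
    ]
    for line in timeline(history):
        print(line)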

Reuters
A 3D-printed miniature model of Elon Musk and the "Grok" logo

A potential 'cognitive hazard' 

The greatest challenge for Grokipedia is an ethical one. The xAI team acknowledged that Wikipedia was one of many resources used to train the system, but emphasised that "the model does not copy texts; it learns patterns from them." This raised a complex question about intellectual property in the age of AI, where the boundaries of ownership have become blurred.

A new movement began to form among freelance writers and journalists who viewed Grokipedia as a chance to reclaim the individual's voice, in contrast to Wikipedia's anonymous collective editing model. 

Wikipedia introduced an optional feature called 'cognitive signature', which allows contributors to sign articles under their real names or pseudonyms, with links to their previous work. Writing for it shifted from a collective and anonymous effort to a personal practice rooted in digital reputation. Many saw this as a revival of the faded concept of the individual author in an era that had celebrated open collaboration.

The Grokipedia project, in contrast, is the product of Musk's philosophy on the relationship between humans and machines. It has since faced growing criticism from academic and research institutions that view it as an attempt to monopolise the process of truth-making through AI. 

It has been described as a 'potential cognitive hazard', with the warning that manipulating bias-assessment algorithms could reshape history to serve ideological or political agendas. What's more, how can a platform owned by xAI claim to be a neutral authority when it ultimately answers to Musk himself?

Observers warn that manipulating Grokipedia's bias-assessment algorithms could reshape history to serve ideological or political agendas

Grokipedia and the three layers of AI

In its early days, Grokipedia relied on three layers of AI. The first analysed the language and extracted concepts, the second compared them with opposing sources to measure bias, and the third produced a final text based on what it called a 'narrative balance of probability'. It constructed paragraphs that accommodated as many viewpoints as possible without clearly favouring one over another. 
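
The article's description of these layers is high-level and xAI has not published the system's design, so the sketch below is purely illustrative of the three stages as described (concept extraction, bias measurement against opposing sources, and balanced drafting); every function name and scoring rule is an invented assumption, not xAI's implementation.

    # Hypothetical three-stage pipeline mirroring the description above.

    def extract_concepts(text: str) -> set[str]:
        """Stage 1: pull candidate concepts out of the source language (toy version)."""
        return {word.strip(".,").lower() for word in text.split() if len(word) > 5}

    def bias_score(concepts: set[str], opposing_sources: list[str]) -> float:
        """Stage 2: estimate bias as the share of concepts absent from opposing accounts."""
        if not concepts:
            return 0.0
        opposing_text = " ".join(opposing_sources).lower()
        missing = [c for c in concepts if c not in opposing_text]
        return len(missing) / len(concepts)

    def draft_balanced_text(accounts: list[str]) -> str:
        """Stage 3: assemble a paragraph that keeps each account's claim side by side."""
        return " ".join(f"One account states: {a}" for a in accounts)

    # Toy run over two conflicting accounts of the same event.
    a = "Officials reported significant casualties."
    b = "Officials denied significant casualties."
    print(bias_score(extract_concepts(a), [b]))  # 0.25: 'reported' is missing from the opposing account
    print(draft_balanced_text([a, b]))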

Musk described the system in an interview as "an intellectual court that does not issue verdicts but simply presents evidence to the public."

Yet, as the platform gained more users, it became clear that its notion of neutrality did not align with the human understanding of the concept. Some articles felt distant and devoid of emotion, while others were so mechanical that readers began to miss Wikipedia's more human tone, with all its flaws and biases. 

The contradictions within narratives

Nonetheless, Grokipedia has exposed contradictions within historical narratives. For instance, the platform could reveal how Russian and Ukrainian accounts describe the same battle in opposite ways, offering readers a rare glimpse into the relativity of truth itself. It is also a philosophical experiment, an attempt to turn AI into what Musk called a "guardian of meaning".

In his first interviews after the launch, Musk introduced the concept of 'statistical truth', arguing that truth no longer stems from a single authority but from the quantitative aggregation of opposing perspectives. According to this logic, every opinion contains a fragment of truth, and every fact requires enough contrasting viewpoints to settle at an average value that can be considered objective. 
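
Musk has not defined 'statistical truth' formally. Read literally as a weighted average over contrasting viewpoints, it might amount to a calculation as simple as the toy example below, in which the stances, weights, and the idea of scoring a claim between 0 and 1 are all assumptions made for illustration.

    # Hypothetical: each viewpoint scores a disputed claim from 0 (false) to 1 (true),
    # weighted by how much corroborating sourcing it carries.
    viewpoints = [
        {"stance": 0.9, "weight": 3.0},  # well-sourced account supporting the claim
        {"stance": 0.2, "weight": 2.0},  # opposing account with fewer sources
        {"stance": 0.5, "weight": 1.0},  # a neutral report
    ]

    statistical_truth = (
        sum(v["stance"] * v["weight"] for v in viewpoints)
        / sum(v["weight"] for v in viewpoints)
    )
    print(round(statistical_truth, 2))  # 0.6: the 'average value' the passage describes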

Several intellectuals have observed that Musk's vision is less a cognitive revolution than a digital reformulation of Gustave Le Bon's 19th-century notion of the 'collective mind'. 

Deena So'Oteh

The project itself exposes a philosophical paradox. While it claims to eliminate human bias, it still depends on data, history, and culture, all of which are imbued with that same bias. The machine can only learn from what humans provide, and what humans provide is inevitably shaped by their subjectivities and ideologies.

This paradox has led some AI scholars to ask an unsettling question: can AI generate a new kind of automated bias, one different from but no less dangerous than human bias? 

Musk has tried to address these concerns by insisting that the system is constantly learning and self-correcting. But the fundamental question remains: who teaches the teacher?

The absence of individual responsibility

Intellectually, Grokipedia represents a shift from an 'encyclopaedia of knowledge' to an 'encyclopaedia of consciousness'. It does not simply store information but attempts to understand how beliefs are formed. 

Perhaps the most troubling aspect of Grokipedia's philosophy is the absence of individual responsibility. Wikipedia shows who edited an article, who reviewed it, and who rejected certain changes. In Grokipedia, however, AI makes these decisions based on hidden criteria. 

Neutrality here is not the result of human discussion but of closed calculations that cannot be questioned. As one expert put it: "In Wikipedia you can disagree with a person, but in Grokipedia you disagree with a machine that does not respond."

AFP
Elon Musk and the "X" logo.

Musk has often clashed with the Western media, which he accuses of institutional bias against freedom of expression. In Grokipedia, he appears to be trying to create a platform that bypasses journalists, editors, and the traditional ways of marshalling knowledge.

Through it, Musk is positioning himself as the chief editor of the AI age. Rather than a writer or observer, he is more the architect of a new way of producing knowledge. In this, he is challenging not only Wikipedia but the broader cultural foundation of Western modernity, where, for now, humans, not algorithms, have the final say.

Musk's ambition is a step towards rethinking humanity's relationship with knowledge, opening the door to a new phase of engineered consciousness in which Grokipedia aims to structure not just information but meaning and memory.

If this pattern continues, the future may bring a troubling equation in which beliefs are shaped not in universities or newsrooms, but in the research labs of powerful corporations that control algorithms.
