The ‘Altman incident’ at OpenAI shows how money talks

The world watched an old-style boardroom coup and counter-coup in a cutting-edge industry that could change humanity. It ended in triumph for some familiar forces: shareholders and the profit motive.

There was a new dawn for artificial intelligence last year, and much has happened since.

AI broke out into the mainstream and created its first well-known name – ChatGPT – which shone brightly, illuminating a new future with potentially enormous benefits for humanity, alongside some significant possible pitfalls.

It left behind the confines of cutting-edge computer labs, where it was famous only among specialist programmers in obscure parts of industry or the military, to become a hot topic everywhere.

The debate on its future, over what capabilities it may reach and how it should be regulated or controlled, is still raging, alongside the surge in its popularity.

AI applications have already rolled out across sectors, stoking unprecedented concerns – across academia, the corporate world, government, and both traditional and social media.

And AI’s biggest names have tracked this stellar trajectory. ChatGPT and its non-profit corporate creator – OpenAI – have joined the galaxy of Silicon Valley stars, along with Sam Altman, its co-founder, chief executive and AI’s poster boy.

A year ago, he was an unknown figure in the business world. OpenAI had yet to change its status to a “capped-profit” company. Since then, it has attracted investment of $10bn from Microsoft in return for a weighty stake of 49%, but without a seat on the board.

OpenAI was also co-founded by none other than Elon Musk. The Microsoft deal angered him, but that is another story.

This one starts with the unintended consequences of how OpenAI's board was set up without a seat for its new big-name investor. The drama it created recently captured the headlines and then the imagination of readers and social media users around the globe via a fast-moving chain of events at the creator of the best-known chatbot.

The "Altman Incident" reveals who – and what – is running the most important company in a vital industry that may have the power to change the world.

Boardroom coup

As with any tale of our times, it began to go viral by trending on social media. Altman's sudden dismissal by the four-person board stoked a slew of speculation.

Two days later, there was more breaking news, this time of Altman's appointment as head of Microsoft's AI operations. It was a clear vote of confidence in the man sacked by the firm part-owned by his new employer.

Then, an outcry. And not just on social media but from most of OpenAI's employees, numbering over 700. They threatened to resign if Altman was not reinstated.

It worked.

He returned safe and sound to the helm while the four directors walked the plank, resigning from the firm they had so recently shocked with a top-level mutiny.

Captivating drama

It was a drama as captivating as it was short, with enough action and intrigue to be covered beyond the business pages of news sites. But there is a longer-lasting and more significant angle to the story: OpenAI's corporate governance.

The reasons behind the decision to sack Altman are unclear, besides the board saying he had been "insufficiently candid."

The company plays a vital role at the frontier of one of the most consequential technologies in development, one that may have the power to reshape the world. That is important work, and there is no regulatory oversight yet.

It was not the first time Altman had been fired. He was dismissed from the top job at the startup incubator, Y Combinator, by its founder, Paul Graham, who The Washington Post has described as Altman's "spiritual mentor".

There is a clear public interest in the reasoning behind Altman's latest sacking. Whatever the rights or wrongs, there should be clarity. If examined objectively and in the open, the board's original decision may prove revealing, even if its execution was clumsy and ill-timed.

Serious problems

OpenAI's corporate governance problems look serious after its rapid growth. They stem from its origins, its structure, and what it calls its "vision and mission". It is a non-profit organisation, yet it can already generate enormous revenues and profits.

Only part of OpenAI is mandated to do that: its capped-profit subsidiary, which develops so-called artificial general intelligence (AGI) products. Their profits help finance the high cost of the advanced computing power the company uses and the human brains behind it.

This, in effect, leaves the company's board in charge of two different goals. As well as following OpenAI's 2018 founding principle – safe AGI that benefits all of humanity – it must also protect the interests of investors, for whom profit is a crucial measure of success.

The massive popularity of ChatGPT has put the company within reach of huge profits. Microsoft's arrival shows that the pursuit of profit has become blatant and relentless: its stake was kept under the 50% line to manage antitrust scrutiny, yet it is large enough to grant access to OpenAI's intellectual property and rights related to the development of AI products.

Altman has not tried to hide this, stoking fears across the industry. Major tech companies have expressed apprehension over the implications of the profit motive driving the development of AI before complete regulatory oversight has developed.

B-Corp potential

Meanwhile, there is an immediately available alternative for OpenAI, under the banner of the "B-Corp" system that provides governance criteria and oversight for companies that see themselves as "Benefit Corporations".

Firms certified as B-Corps must meet open, transparent and verified standards on how they benefit people and on environmental and social benchmarks, over and above the returns they generate for shareholders.

This kind of precise accountability is needed in the tech sector — especially in AI. It should also extend to investors. In OpenAI's case, that means Microsoft, which was, in turn, one of the original start-ups-turned-giants of the computer age.

Microsoft already operates under a similar framework, ESG, with the initials covering environmental, social and governance matters. It also has the clearly defined responsibilities of a public company. This helps its stewardship of the public interest over AI matters.

However, ESG criteria seem subservient to the profit motive, which looks like the primary driver of Microsoft's behaviour in buying into OpenAI and over Altman's dramatic sacking and reinstatement. Profits are now driving the rapid pace at which it is advancing AI product development.

What is Microsoft betting on? Is it a tech-sector friendship, as Altman has described it? Or is Microsoft setting its sights on making the money that will help it win the race against the other giants in this frantic arena, Google, Meta, and X Corp?

The Altman incident showed that shareholders have the final say at OpenAI; in capitalism, money talks. And sometimes, they don't even need a single seat on the board to exert this control.

Microsoft CEO Satya Nadella spoke publicly after the Altman incident, addressing his remarks to his investors: "We'll make sure that the governance system is reformed so that we have more guarantees and so that we have no surprises."

After Altman's reinstatement, Microsoft's share price rose.

It is not unusual for start-ups to have unconventional or immature governance systems, but the stakes are highest in the tech world.

The biggest, and perhaps the only loser, is safety-first AI.
