AI’s Power Struggle: Musk, Altman, and the Balance of Ethics versus Innovation
Harry Mealia
What was once a formidable partnership has turned toxic: Elon Musk and Sam Altman, co-founders of OpenAI, now find themselves embroiled in a public lawsuit. While ChatGPT may have originally brought eyes to the company, it is the alliance with Microsoft and an $80 billion valuation that have drawn attention once again. OpenAI's recent ventures, and in particular its creation of a 'for-profit' arm, form the basis of Musk's legal action. What is less widely known, however, is the long history between Musk, Altman and OpenAI. Their relationship and past dealings illustrate the delicate balance between innovation and ethics in the new world of artificial intelligence.
The Butting Heads of AI
OpenAI started in 2015 as a non-profit AI research company with a mission of ensuring that 'artificial intelligence benefits all of humanity'; Musk was among its founders. In a recent blog post, OpenAI explained that it quickly realised its non-profit structure would struggle to raise the capital needed to meet its goals in generative AI. During discussions about a commercial arm of the company, Musk leveraged his financial power, proposing a merger with Tesla and positioning himself as CEO. While this might have accelerated the company's growth, it ran counter to the firm's principle of never handing absolute power to a single person. Musk soon left OpenAI and eventually founded his own competitor, xAI. Following his departure, however, OpenAI would face further personnel complications.
In November 2023, OpenAI announced that Altman would be stepping down as CEO, explaining that a review had concluded he "was not consistently candid in his communications with the board". The decision was met with fierce resistance across Silicon Valley: employees threatened to resign en masse if Altman was not reinstated, and Greg Brockman, chairman of the board, quit his role as president in protest. The backlash was so strong that Altman was reinstated five days later, with an external review by the law firm WilmerHale commissioned as part of his return. The volatility and confusion OpenAI faced during this turbulent period is often attributed to the firm's unique organisational structure.
New Frontiers for the AI Market
OpenAI now faces new competition from xAI's chatbot Grok, a rival to the widely known ChatGPT. Grok is advertised as a more flexible alternative, able to give witty answers and willing to tackle controversial questions that other chatbots typically avoid. Its standout feature is access to X (formerly Twitter), which lets it draw on current affairs. While competitors Gemini and ChatGPT do not offer the same, there are questions about how accurate Grok can be, given that its information comes from X, a platform with a colourful history when it comes to filtering factual information. On 11 March, Musk announced via X that xAI would open-source Grok, meaning anyone can access, view and modify the code as they please. That offers a degree of accessibility and transparency that ChatGPT and Gemini currently do not provide.
The market for artificial intelligence is not only becoming highly profitable, as OpenAI's valuation shows, but also one that poses serious risks, and it has drawn in high-profile players such as Google, Microsoft and Facebook. Although Musk and Altman built the headline-grabbing OpenAI and ChatGPT that ignited this AI boom, their relationship has collapsed, with Musk going as far as to say "OpenAI is a lie". Amid this conflict, a question hangs over the future of artificial intelligence: is Musk's lawsuit against OpenAI simply a revenge mission aimed at eliminating competition for his own chatbot, or does it reflect genuine concern about the dangers of AI?
