Elon Musk, one of the busiest men in the world, has unveiled a new project to “understand the universe”. The new AI project, named xAI, follows Musk’s concerns over the prospect of an uncontrollable future similar to the Terminator films. This new venture aims to create an AI that is “maximally curious” about humans, rather than being programmed with specific moral guidelines.

Elon Musk has said numerous times that programming artificial intelligence with morality can be incredibly dangerous, stating that it is much easier to manipulate such a system.

“If you programme a certain reality [into an AI] you have to say what morality you are programming. Whose decision is that?”

Elon Musk

He added that once an AI is programmed with a specific moral standpoint, it becomes easier to prompt it into reversing that standpoint. This is known as the Waluigi Effect.

Elon Musk was one of the numerous individuals who signed the open letter calling for a slowdown in the development of powerful AI models such as ChatGPT. Musk has always been vocal that a pause was necessary, arguing that we do not yet have the capabilities to fully safeguard against the potential threats.

It therefore comes as no surprise that he now believes such a pause is unrealistic. He is also optimistic that xAI will offer an alternative and safer path compared to his competitors at Google and OpenAI.

Despite his previous comments and concerns about AI, Elon Musk does see a positive scenario in which AI, done right, could allow humanity to enter a golden age of prosperity. Of course, like many notable figures involved in this technology, Musk also sees grave potential danger and the possible risk of a dystopian future.

He also stated back in early 2014 that superintelligence – AI far surpassing human cognitive abilities – could be much closer than experts believe, predicting it might be possible within the next five or six years.

Meanwhile, with the Pentagon increasingly curious about AI and its uses on the battlefield, more and more potential dangers are lurking on the horizon.

As for xAI itself, Musk has openly admitted that it will take time for his initiative to reach the capabilities of OpenAI and Google.

The xAI team comprises talented former engineers and researchers from distinguished companies such as Google, DeepMind and Microsoft. Some of the team at xAI is as follows:

  • Igor Babuschkin – A former engineer at DeepMind.
  • Tony Wu – A former engineer at Google.
  • Christian Szegedy – A former research scientist at Google.
  • Greg Yang – Formerly of Microsoft.
  • Dan Hendrycks – Director of the Center for AI Safety [advisory role].

Only time will tell whether this is just a vanity project from Elon to shift the spotlight away from Twitter’s recent issues, or whether something genuine and groundbreaking will come from it. It may take some time, but it will be interesting to see how this move changes the strategy of OpenAI and Microsoft, with Sam Altman already coming out to say that GPT-5 is not yet in development.

The bigger question is: should we allow these billionaires and large corporations to treat AI as a massive playground and dictate its future?