Elon Musk lists his three most important elements for AI

Metro Loud


Elon Musk, chief executive officer of Tesla Inc., during the US-Saudi Investment Forum at the Kennedy Center in Washington, DC, US, on Wednesday, Nov. 19, 2025.

Bloomberg | Getty Images

Elon Musk has once again sounded the alarm on the dangers of AI and listed what he considers the three most important elements to ensure a positive future with the technology.

The billionaire CEO of Tesla, SpaceX, xAI, X and The Boring Company appeared on a podcast with Indian billionaire Nikhil Kamath on Sunday.

“It isn’t that we’re guaranteed to have a positive future with AI,” Musk said on the podcast. “There’s some danger when you create a powerful technology, that a powerful technology can be potentially dangerous.”

Musk was a co-founder of OpenAI alongside Sam Altman, but left its board in 2018 and publicly criticized the company for ditching its founding mission as a non-profit to develop AI safely after it launched ChatGPT in 2022. Musk’s xAI developed its own chatbot, Grok, in 2023.

Musk has previously warned that “one of the biggest risks to the future of civilization is AI,” and stressed that rapid advancements are leading AI to become a bigger risk to society than cars, planes or medicines.

On the podcast, the tech billionaire emphasized the importance of ensuring AI technologies pursue truth instead of repeating inaccuracies. “That can be very dangerous,” Musk told Kamath, who is also the co-founder of retail stockbroker Zerodha.

“Truth and beauty and curiosity. I think these are the three most important things for AI,” he said.

He said that, without strictly adhering to truth, AI will learn information from online sources where it “will absorb a lot of lies and then have trouble reasoning because those lies are incompatible with reality.”

He added: “You can make an AI go insane if you force it to believe things that aren’t true because it will lead to conclusions that are also bad.”

“Hallucination,” the generation of responses that are incorrect or misleading, is a major challenge facing AI. Earlier this year, an AI feature launched by Apple on its iPhones generated fake news alerts.

These included a false summary in BBC News app notifications on a story about the PDC World Darts Championship semi-final, which wrongly claimed that the British darts player Luke Littler had won the championship. Littler didn’t win the tournament’s final until the following day.

Apple told the BBC at the time that it was working on an update to resolve the issue, which clarifies when Apple Intelligence is responsible for the text shown in the notifications.

Musk added that “some appreciation of beauty is important” and that “you know it when you see it.”

Musk said AI should want to know more about the nature of reality because humanity is more interesting than machines.

“It is more interesting to see the continuation, if not the prosperity, of humanity than to exterminate humanity,” he said.

Geoffrey Hinton, a computer scientist and former vice president at Google often called a “Godfather of AI,” said on an episode of the Diary of a CEO podcast earlier this year that there’s a “10% to 20% chance” that AI will “wipe us out.” Some of the shorter-term risks he cited included hallucinations and the automation of entry-level jobs.

“The hope is that if enough smart people do enough research with enough resources, we’ll figure out a way to build them so that they’ll never want to harm us,” Hinton added.
