OpenAI CEO calls DeepSeek’s AI model impressive

DeepSeek trained its V3 model with less than $6 million in computing power, using Nvidia’s lower-capability H800 chips

OpenAI CEO Sam Altman described Chinese startup DeepSeek’s R1 AI model as “impressive” but reaffirmed OpenAI’s focus on greater computing power as a key driver of its success.

DeepSeek gained attention for training its V3 model using less than $6 million in computing power with Nvidia’s lower-capability H800 chips.

DeepSeek-R1, launched last week, is reported to be 20 to 50 times more cost-effective than OpenAI’s o1 model, depending on the task, according to DeepSeek’s WeChat account. “DeepSeek’s R1 is an impressive model, particularly around what they’re able to deliver for the price,” Altman wrote on X, while emphasizing that OpenAI’s research roadmap prioritizes high computing power for future advancements.

The emergence of DeepSeek has sparked doubts over the scale of AI investment by U.S. tech companies, with some questioning whether billions of dollars in pledged spending are justified. The development has also hit major tech stocks: Nvidia shed $593 billion in market value on Monday, the largest single-day loss in Wall Street history.

Monitoring Desk
Our monitoring team diligently searches the vast expanse of the web to carefully handpick and distill top-tier business and economic news stories and articles, presenting them to you in a concise and informative manner.
