Chinese AI startup DeepSeek disclosed cost and revenue estimates for its V3 and R1 models on Saturday, revealing a theoretical daily cost-profit ratio of up to 545%.
However, the Hangzhou-based company cautioned that actual revenue remains significantly lower due to pricing structures and limited monetization.
This is the first time DeepSeek has shared details about its profit margins from inference tasks, the phase in which trained AI models generate outputs such as chatbot responses. The disclosure comes amid global AI market volatility, with DeepSeek’s rapid rise already pressuring AI stocks outside China.
In January, AI shares fell sharply after DeepSeek’s chatbot, powered by its R1 and V3 models, gained traction worldwide.
DeepSeek previously said it spent less than $6 million on chips used to train its models, a fraction of the billions invested by U.S. rivals such as OpenAI. The firm also used Nvidia’s H800 chips, which are less advanced than those available to U.S. companies, fueling investor skepticism over the heavy capital expenditures of American AI firms.
In a GitHub post, DeepSeek estimated that if the rental cost of one H800 chip is $2 per hour, its total daily inference cost for V3 and R1 is $87,072. The theoretical daily revenue from these models stands at $562,027, translating to an annual revenue potential of over $200 million.
However, the company noted that actual earnings are considerably lower because V3 is priced more cheaply, web and app access remains free, and developers receive discounts during off-peak hours.
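For readers who want to check the math, here is a minimal back-of-the-envelope sketch of how the disclosed daily figures produce the 545% ratio and the $200 million annualized estimate; the exact rounding conventions in DeepSeek's GitHub post are assumed rather than confirmed.

```python
# Back-of-the-envelope check of DeepSeek's disclosed figures.
daily_cost = 87_072        # USD per day, assuming H800 rental at $2/hour
daily_revenue = 562_027    # USD per day, theoretical revenue

# Cost-profit ratio: profit expressed relative to cost
profit_ratio = (daily_revenue - daily_cost) / daily_cost
print(f"Theoretical cost-profit ratio: {profit_ratio:.0%}")    # ~545%

# Annualized theoretical revenue
annual_revenue = daily_revenue * 365
print(f"Annualized theoretical revenue: ${annual_revenue:,}")  # ~$205 million
```

The 545% figure thus measures profit over cost, not revenue over cost, and holds only under the theoretical pricing assumptions in the post.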