Making Sense of Tech

 

A new AI breakthrough is making waves in mainstream media and financial markets. For those outside the tech world: why does this matter?

What happened (in a nutshell)

Chinese startup DeepSeek has unveiled its R1 LLM, demonstrating reasoning capabilities that, until recently, set OpenAI's flagship model apart from the competition. DeepSeek's approach to building it challenges the so-called "scaling law", the principle that has driven billions in investments into AI startups and tech companies.

The full picture

  • OpenAI's projected operational costs for training and running AI models in 2024: $7 billion. On top of that, $500 billion in US AI infrastructure investments were announced as part of Project Stargate.
  • DeepSeek's total training investment for its competitive model: under $6 million, achieved while facing trade restrictions and lacking access to the latest computing hardware (GPUs).
  • The result? Comparable performance at a fraction of the cost through novel, more efficient training methods.

This raises an interesting question about the "scaling law" - the principle that bigger models, more data, and more computing power (read: more money) automatically lead to better AI performance. It is this very principle that has justified massive investments in companies like OpenAI.

But if smaller players can achieve similar results at a fraction of the cost - why invest billions of dollars in the first place?

To be clear: this does not mean that DeepSeek is more innovative than OpenAI & Co and will now lead the market. But it shows that there are many more ways to innovate and advance AI technology that don't rely on big money.

In my latest book, I wrote that investments into hyped AI startups will dry up sooner or later, and that the focus will shift towards the business value we can create with the models we already have. Perhaps we have reached that point sooner than expected.

 

Imprint        © Dominik Hörndlein 2025, all rights reserved.