Let’s do some estimates:
Assume a node rents for $20/hour, can serve 300 concurrent requests, and each message takes about 30 seconds to generate. The throughput of a node is then
300 concurrent * (3600 s / 30 s) = 36,000 messages / hour.
The cost per message, then, is $20 / 36,000 ≈ $0.00056.
With 300 messages per user per month, the compute cost for the AI vendor is 300 × $20 / 36,000 ≈ $0.17 / month per user. By contrast, a subscription costs $20.
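The estimate above can be checked with a few lines of arithmetic. All inputs are the thread's own assumptions ($20/h node rental, 300 concurrent requests, ~30 s per message, 300 messages per user per month), not measured figures:

```python
# Back-of-envelope check of the numbers above. Every input is an
# assumption from the thread, not a measured value.
node_cost_per_hour = 20.0     # $/h to rent the node
concurrent = 300              # simultaneous requests per node
seconds_per_message = 30      # generation time per message
messages_per_user = 300       # per month

messages_per_hour = concurrent * (3600 / seconds_per_message)  # 36,000
cost_per_message = node_cost_per_hour / messages_per_hour      # ~$0.00056
monthly_compute_per_user = messages_per_user * cost_per_message  # ~$0.17

print(f"{messages_per_hour:.0f} msg/h, "
      f"${cost_per_message:.5f}/msg, "
      f"${monthly_compute_per_user:.2f}/user/month")
```

Note that halving the concurrency or doubling the generation time scales the per-user cost linearly, so even pessimistic assumptions leave a wide margin under a $20 subscription.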
So given these assumptions, it’s other things (R&D, safety research, training runs, free accounts, etc.) that make up the bulk of the cost, and those could be scaled down to turn a profit. What will they do? Given how hyped AI currently is and how competitive the landscape is, I don’t think they’ll raise prices much. With much cheaper products like DeepSeek on the horizon, it’s more likely that they squeeze out a profit by becoming more efficient.
It’s a weird market.
An H100 runs $25k minimum, so a node of eight is $200,000 just in GPUs. They draw 700 W each, or 5.6 kW total; at my local prices that’s about a dollar per hour just for electricity.
At $20/h it’s going to take you a couple of years to break even. The cards might still hold some value at that point, or they might be obsolete.
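A quick sketch of that break-even claim, assuming the numbers above (8×H100 at $25k each, 700 W per GPU, $20/h rental income) plus an assumed electricity rate of ~$0.18/kWh, which is roughly what "a dollar per hour" for 5.6 kW implies:

```python
# Break-even estimate for owning the node instead of renting it out.
# gpus, gpu_price, watts_per_gpu, and the $20/h rate come from the
# thread; kwh_price is an assumed local electricity rate.
gpus = 8
gpu_price = 25_000            # $ per H100, low end
watts_per_gpu = 700
rental_rate = 20.0            # $/h earned renting the node
kwh_price = 0.18              # $/kWh (assumption)

capex = gpus * gpu_price                        # $200,000 up front
power_kw = gpus * watts_per_gpu / 1000          # 5.6 kW
electricity_per_hour = power_kw * kwh_price     # ~$1/h
margin_per_hour = rental_rate - electricity_per_hour
breakeven_years = capex / margin_per_hour / (24 * 365)

print(f"break-even at 100% utilization: {breakeven_years:.1f} years")
```

This comes out to roughly 1.2 years at 100% utilization; at more realistic utilization, "a couple of years" is about right, and that ignores hosting, networking, and cooling overhead.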
Well, that depends entirely on your users… coding agents, or agents in general that run for hours, will wreck your calculation.
That won’t happen due to token limits. According to Anthropic, only about 5% of users hit the limit.
Exactly. Then you move up to the $100 or $200 tiers, or to per-token API pricing.