And in a way, they ought to be rolling in even more compute than it looks because they are so much more focused: Anthropic isn’t doing image generation, it isn’t doing voice synthesis, it isn’t doing video generation… (As far as we know they aren’t researching those, and definitely not serving them to customers like OA or Google.) It does text LLMs. That’s it.
But nevertheless, an hour ago, working on a little literary project, I hit Anthropic switching my Claude to ‘concise’ responses to save compute. (Ironically, I think that may have made the outputs better, not worse, for that project, because Claude tends to ‘overwrite’, especially in what I was working on.)
I’d guess that the amount spent on image and voice is negligible for this BOTEC?
I do think that the amount spent on inference for customers should be a big deal though. My understanding is that OpenAI has a much bigger userbase than Anthropic. Shouldn’t that mean that, all else equal, Anthropic has more compute to spare for training & experiments? Such that if Anthropic has about as much compute total, they in effect have a big compute advantage?
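To make that intuition concrete, here’s a minimal BOTEC sketch. All of the numbers in it (the total compute figure and the inference shares) are made-up placeholders purely for illustration, not reported figures for either lab:

```python
# Hedged BOTEC sketch: every number here is a hypothetical placeholder,
# not a reported figure for OpenAI or Anthropic.

def training_compute(total_compute: float, inference_share: float) -> float:
    """Compute left over for training & experiments after serving customers."""
    return total_compute * (1.0 - inference_share)

# Suppose (hypothetically) both labs have the same total compute budget,
# but OpenAI spends a larger fraction on customer inference because of
# its much bigger userbase.
TOTAL = 100.0  # arbitrary units of compute

openai_train = training_compute(TOTAL, inference_share=0.6)     # assume 60% on inference
anthropic_train = training_compute(TOTAL, inference_share=0.3)  # assume 30% on inference

print(f"OpenAI training compute:    {openai_train:.0f}")     # 40
print(f"Anthropic training compute: {anthropic_train:.0f}")  # 70
# Under these made-up shares, equal totals would leave Anthropic with ~1.75x
# the training/experiment compute -- the 'compute advantage' in question.
```

The point of the sketch is just that the advantage scales with the gap in inference share, not with total compute, so even rough userbase ratios would move the estimate a lot.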