The introduction of the LPU (https://wow.groq.com/GroqDocs/TechDoc_Latency.pdf) completely changes the field with respect to scaling laws, pivoting us to matters like latency.
No it doesn’t, not unless Groq wants to discuss publicly what the cost of that hardware was and it turns out to be, to everyone’s shock, well under $5m… (And you shouldn’t trust any periodical which wastes half an article on the topic of what Groq & Grok have to do with each other. There are many places you can get AI news, you don’t have to read Coin Telegraph.)
Mmmh, ok. I guess let's keep an eye out.