Notes on Tesla AI day presentation:
https://youtu.be/j0z4FweCy4M?t=6309 Here they claim they’ve got more than 10,000 GPUs in their supercomputer, and that this makes it more powerful than the top 5 publicly known supercomputers in the world. Consulting this list https://www.top500.org/lists/top500/2021/06/ it seems that this would put their computer at just over 1 exaflop, i.e. 10^18 FLOP/s, which checks out (I think I had heard rumors this was the case). And if you look at https://en.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude to get a sense of how many FLOP/s GPUs do these days (10^13? 10^14?), it roughly checks out as well: 10,000 GPUs × ~10^14 FLOP/s ≈ 10^18 FLOP/s.
Anyhow. 1 exaflop means it would take about a day to do as much computation as was used to train GPT-3 (10^23 ops is 5 OOMs more than 10^18 ops/sec, so ~10^5 seconds, which is a bit over a day). So in 100 days, it could do a training run 2 OOMs bigger than GPT-3. So… I guess the conclusion is that scaling up the AI compute trend by +2 OOMs will be fairly easy, and scaling up by +3 or +4 should be feasible in the next five years. But +5 or +6 seems probably out of reach? IDK, I’d love to see someone model this in more detail. Probably the main thing to look at is total NVIDIA or TSMC annual AI-focused chip production.
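The back-of-envelope arithmetic above can be sketched in a few lines. These are my assumed round numbers, not official figures: ~10^14 FLOP/s per GPU and ~10^23 FLOP total to train GPT-3 (the figure these notes use; published estimates are closer to 3×10^23).

```python
import math

# Assumed round numbers (see lead-in; not official figures).
gpus = 10_000
flop_per_gpu_per_s = 1e14                        # ~10^14 FLOP/s per GPU
cluster_flop_per_s = gpus * flop_per_gpu_per_s   # 1e18 FLOP/s = 1 exaflop

gpt3_train_flop = 1e23                           # total compute to train GPT-3

# Time for the cluster to match GPT-3's training compute: ~1.2 days.
days_to_match_gpt3 = gpt3_train_flop / cluster_flop_per_s / 86_400

# Total compute over a 100-day run, and how many OOMs past GPT-3 that is: ~1.9.
flop_in_100_days = cluster_flop_per_s * 100 * 86_400
ooms_over_gpt3 = math.log10(flop_in_100_days / gpt3_train_flop)
```

Note that 100 days at 1 exaflop is 8.64×10^24 FLOP, which is a factor of ~86x over GPT-3, so "2 OOMs bigger" is slightly generous but the right ballpark.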
https://youtu.be/j0z4FweCy4M?t=7467 Here it says Tesla’s AI training computer is:
4x more compute per dollar
1.3x more energy-efficient
than… what? I didn’t catch what they were comparing to.
They say they can do another 10x improvement in the next generation.
If they are comparing to the state of the art, that’s a big deal I guess?
https://www.youtube.com/watch?v=j0z4FweCy4M Here Elon says their new robot is designed to be slow and weak so that humans can outrun it and overpower it if need be, because “you never know. [pause] Five miles an hour, you’ll be fine. HAHAHA. Anyways… ”
Example tasks it should be able to do: “Pick up that bolt and attach it to that car. Go to the store and buy me some groceries.”
Prototype should be built next year.
The code name for the robot is Optimus. The Dojo simulation lead said they’ll be focusing on helping out with the Optimus project in the near term. That’s exciting because the software part is the hard part; it really seems like they’ll be working on humanoid-robot software/NN training in the next year or two.