Yes, and I stand by that assertion. The above will work, and in some cases already works, at close to human level (self-driving is very close). Eventually it's a 1000x time savings in task domains like mining, farming, logistics, materials processing, manufacturing, and cleaning.
Not necessarily on prototype fusion reactor construction specifically, but possibly across the fusion industry once engineers find a design that works.
I was thinking it would help. Something like CERN, which is similar to what a fusion reactor will look like, has a whole bunch of ordinary stuff in it: lots of roughly dug tunnels, concrete, handrails, racks of the standard computers you would see in an office, and so on. Large assemblies that need to be trucked in. Each huge instrument assembly is made of simpler parts.
If robots do all of that, it still saves time (though probably less than 90 percent of it).
You are correct that a neural sim probably won't cover repair. You've seen that Nvidia has neural sims. I was assuming you first classify from sensor fusion (many cameras, lidar, etc.) into a representation of the state space, then from that representation query a sim to predict the next frames for that state.
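To make that two-stage pipeline concrete, here's a minimal sketch in PyTorch. The module names, dimensions, and architecture are all my own illustrative assumptions, not Nvidia's actual design:

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Compress fused sensor input (cameras, lidar, ...) into a state vector."""
    def __init__(self, sensor_dim: int, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sensor_dim, 256), nn.ReLU(),
            nn.Linear(256, state_dim),
        )

    def forward(self, fused_sensors: torch.Tensor) -> torch.Tensor:
        return self.net(fused_sensors)

class NeuralSim(nn.Module):
    """Queried with a state and action, predicts a distribution over the next state."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * state_dim),  # mean and log-variance per state dim
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor):
        mean, log_var = self.net(torch.cat([state, action], dim=-1)).chunk(2, dim=-1)
        return mean, log_var
```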
A hybrid sim would be one where you use both a physics engine and a neural network to fine-tune the results (such as running them in series, or by overriding intermediate timestep frames).
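A sketch of the "in series" variant, where `physics_step` stands in for any engine call and `residual_net` is a hypothetical small network that outputs a state-sized correction:

```python
def hybrid_step(state, action, physics_step, residual_net):
    """One hybrid timestep: analytic physics first, learned correction in series."""
    coarse = physics_step(state, action)        # physics engine's next-state estimate
    correction = residual_net(coarse, action)   # neural net fine-tunes the result
    return coarse + correction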
Training one is pretty straightforward: you save your predictions from the last frame and then compare them to what the real world did the next frame. (It's more complex than that, because you predict a distribution of outcomes and need a lot more than one frame from the real world to correct your probabilities.)
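A sketch of that loop, under the simplifying assumptions that the predicted distribution is a diagonal Gaussian (the mean/log-variance output from the earlier sketch) and that we score against a single real frame, which as noted is not enough in practice:

```python
def train_step(sim, optimizer, state_t, action_t, real_state_t1):
    """Score last frame's saved prediction against what the real world actually did."""
    mean, log_var = sim(state_t, action_t)  # the prediction made at time t
    # Gaussian negative log-likelihood of the observed frame at t+1
    nll = 0.5 * (log_var + (real_state_t1 - mean) ** 2 / log_var.exp())
    loss = nll.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```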
This is also a good way to know when the machine is in over its head. For example, if it spilled coffee on a laptop, and the machine has no understanding of liquid damage but does need to open a bash shell, the laptop screen will likely be blank or showing a crash, which won't be what the machine predicted as the outcome of trying to start the laptop.
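One cheap way to operationalize "the outcome wasn't what the machine predicted," again assuming the Gaussian prediction above; the threshold value is an arbitrary placeholder:

```python
def is_over_its_head(mean, log_var, observed, z_threshold=6.0):
    """Flag an observation far outside the predicted distribution, e.g. a blank
    screen where the sim expected a booting laptop, so the robot escalates."""
    std = (0.5 * log_var).exp()         # std-dev of the predicted distribution
    z = (observed - mean).abs() / std   # per-dimension z-score
    return bool(z.max() > z_threshold)  # True -> stop and ask for help
```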
Most humans can't fix a laptop either; a human will just ask a specialist to repair or replace it. So that's one way for the machine to handle it, or it can ask a human.
This is absolutely advanced future AI, and gradually, as humans fix bugs, the robots would begin to exceed human performance (partially just from faster hardware). But my perspective is: "OK, this is what we have; what's a reasonable way to proceed to something we can safely use in the near future?"
It seems you are assuming humans skip right to very dangerous ASI-level optimizers before robots can reliably pour coffee. That may not be a reasonable world model.