This is a long-term project for those future centuries that happen in a year.
Unless the project is completely simulable (for example, Go or chess), you’re rate-limited by the slowest serial steps. This is just Amdahl’s law.
I mean, absolutely this will help, and you can also build your prototypes in parallel or test the resulting new product in parallel, but the serial time for a single test becomes limiting for the entire system.
For example, if the product is a better engine, you evaluate all data humans have ever recorded on engines, build an engine sim, and test many possibilities. But there is residual uncertainty in any sim; to resolve it you need thousands of experiments. Even if you can run all the experiments in parallel, you still must wait the length of a single experiment.
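(A minimal sketch of that floor, with a hypothetical helper and made-up numbers rather than anything from the engine example: even if thinking is effectively free and every experiment runs in parallel, the wall-clock time per iteration is bounded below by the duration of one experiment.)

```python
# Illustrative Amdahl-style floor: wall-clock time per design iteration
# is bounded below by the slowest serial step, however much the thinking
# and the experiments are parallelized. All numbers here are made up.

def iteration_time_hours(think_hours, experiment_hours, n_experiments, n_rigs):
    """Wall-clock hours for one design iteration.

    Thinking is assumed to be nearly free (massively sped up); experiments
    run in parallel across n_rigs test rigs, but each experiment still
    takes its full serial duration.
    """
    batches = -(-n_experiments // n_rigs)  # ceiling division
    return think_hours + batches * experiment_hours

# 1000 experiments of 720 hours (1 month) each:
print(iteration_time_hours(0.04, 720, 1000, 1000))  # 720.04 -> still a month
print(iteration_time_hours(0.04, 720, 1000, 100))   # 7200.04 -> ten months
```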
Accelerated lifecycle testing compresses several years of use into a few weeks of testing, typically by elevating the operating temperature and running the device under extreme load for the test period.
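(For what it’s worth, the usual way that compression factor is estimated for temperature-accelerated testing is the Arrhenius model; the sketch below is generic reliability-engineering practice, not anything from this thread, and the activation energy and temperatures are assumed values.)

```python
import math

# Arrhenius acceleration factor, the standard reliability-engineering model
# for temperature-accelerated life testing. Ea and both temperatures are
# assumed, illustrative values.
BOLTZMANN_EV_PER_K = 8.617e-5

def arrhenius_acceleration(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor between use and stress temperatures (Celsius)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV_PER_K) * (1 / t_use_k - 1 / t_stress_k))

af = arrhenius_acceleration(ea_ev=0.7, t_use_c=55, t_stress_c=125)
print(f"acceleration factor ~{af:.0f}x")                       # roughly 75-80x
print(f"5 years of use ~ {5 * 52 / af:.1f} weeks of testing")  # a few weeks
```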
So say you do “1 decade” of engine design in 1 day and come up with 1000 candidate designs. Then you manufacture all 1000 in parallel over 1 month (casting and machining have slow steps), then spend 1 more month on accelerated lifecycle testing.
Suppose you need to do this iteration loop 10 times to get to “rock solid” designs better than the ones currently in use. Then it took you about 20 months, vs 100 years for humans to do it.
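(Checking that arithmetic explicitly, using only the illustrative figures above:)

```python
# Arithmetic check for the illustrative engine-design loop above.
design_days = 1         # "1 decade" of design compressed into 1 day
manufacture_days = 30   # all 1000 candidates cast and machined in parallel
testing_days = 30       # accelerated lifecycle testing
iterations = 10
human_baseline_years = 100

ai_project_days = iterations * (design_days + manufacture_days + testing_days)
print(ai_project_days / 30)                          # ~20 months
print(human_baseline_years * 365 / ai_project_days)  # ~60x speedup
```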
A 50-60x speedup is enormous, but it’s not millions. I think a similar argument applies to most practical tasks. Note that tasks like “design a better aircraft” have the same requirement for testing, and “design better medicine” crucially does.
One task that seems like an obvious target for AI R&D, designing a better AI processor, is notable because current silicon fabrication processes take months of machine time. It’s a pretty extreme example: you must wait months between iteration cycles.
Also note you needed enough robotic equipment to do the above. Since robots can build robots, that’s not going to take long, but you have several years or so of “latency” to get enough robots.
Any testing can be done in simulation, as long as you have a simulator and it’s good enough. A few-hundred-times speedup in thinking allows very quickly writing very good specialized software for learning and simulation of all relevant things, based on theory that’s substantially better. The speed of simulation might be a problem, and there’s probably a need for physical experiments to train the simulation models (but not to directly debug object-level engineering artifacts).
Still, in the physical world, the activity of an unfettered 300x-speed human-level AGI probably looks like building tools for building tools, without scaling production and on the first try, rather than cycles of experiments, reevaluation, and productization. I suspect macroscopic biotech might be a good target. It’s something obviously possible (as in animals) and probably amenable to specialized simulation. This might take some experiments to pin down, but probably not years of experiments, as at every step it takes no time at all to very judiciously choose what data to collect next. There is already a bootstrapping technology: fruit fly biomass doubles every 2 days, energy from fusion will help with scaling, and once manufactured, cells can reconfigure.
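(As a rough illustration of what a 2-day doubling time buys; the starting and target masses are arbitrary assumptions, not anything claimed above.)

```python
import math

# How long exponential doubling takes to scale biomass, assuming the 2-day
# doubling time mentioned above. Start and target masses are arbitrary.
doubling_days = 2
start_kg = 1
target_kg = 1_000_000  # a thousand tonnes

doublings = math.log2(target_kg / start_kg)
print(f"~{doublings:.0f} doublings, ~{doublings * doubling_days:.0f} days")
# ~20 doublings, ~40 days
```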
A millionfold speedup in thinking (still assuming no superintelligence) probably requires hardware that implies an ability to significantly speed up simulations.
Hardware: https://www.lesswrong.com/posts/adadYCPFAhNqDA5Ye/processor-clock-speeds-are-not-how-fast-ais-think?commentId=Nd8h72ZqSJfsMJK8M

No, this hardware would not make simulations faster. Different hardware could speed them up some, but since simulations are already done on supercomputers running at multiple GHz, the speedup would be about 1 OOM, which is typical for going from general-purpose processors to ASICs. Simulation would still be the bottleneck.
This LessWrong post argues pretty convincingly that simulations cannot model everything, especially for behavior relevant to nanotechnology and medicine: https://www.lesswrong.com/posts/etYGFJtawKQHcphLi/bandgaps-brains-and-bioweapons-the-limitations-of
Assuming the physicist who wrote that LessWrong post is correct, cycles of trial and error, prototyping, and experiments are unavoidable.
I also agree with the post for a different reason: real experimental data, such as human-written papers on biology or nanoscale chemistry, leave enough uncertainty to fit trucks through. The issue is that you have hand-copied fields of data, large withheld datasets because they had negative findings, needlessly vague language describing what was done, different labs at different places with different staff and equipment, different subjects (current tech cannot simulate or build a living mockup of a human body, and due to the above there is insufficient data to do either), and so on.
You have to try things, even if it’s just to collect data you will use in your simulation, and ‘trying stuff’ is slow. (Mammalian cells take hours to weeks to grow complex structures; electron-beam nanolathes take hours to carve a new structure; etc.)
When you design a thing, you can intentionally make it more predictable and faster to test, in particular with modularity. If the goal is designing cells that grow and change in controllable ways, all experiments are tiny. As with machine learning, new observations from the experiments generalize by improving the simulation tools, not just the object-level designs. And a much more advanced theory of learning should enable much better sample efficiency with respect to external data.
If a millionfold speedup is already feasible on current hardware, then it doesn’t take hardware advancement, and as a milestone it indicates no hardware benefit for simulation. But that point responded to the hypothetical where there is already massive scaling in hardware compared to today (such as through macroscopic biotech to scale physical infrastructure), which should, as another consequence, make simulation of physical designs much better (on its own hardware specialized for being good at simulation). For example, this is where I expect uploading to become feasible to develop, not at the 300x-speedup stage of software-only improvement, because simulating wild systems is harder than designing something predictable.
(This is exploratory engineering, not forecasting; I don’t actually expect human-level AGI without superintelligence to persist that long, and if nanotech is possible I don’t expect scaling of macroscopic biotech. But neither seems crucial.)
When you design a thing, you can intentionally make it more predictable and faster to test,
Absolutely. This happens today, where silicon release cycles only leave time for a few revisions.
My main point with the illustrative numbers was to show how the time complexity works. You have this million-times-faster AI; it can do 10 years of work in about 2.24 minutes, it seems.
(Assuming a human is working 996, i.e. 72 hours a week.)
Even if we take the most generous possible assumptions about how long it takes to build something real, test it, and then fix your mistakes, the limiting factors are roughly 43,000 times slower than we can think. Say we reduce our serial steps for testing and only need 2 prototypes and then the final version instead of 10.
So we made it 3 times faster!
So the real world is still slowing us down by a factor of 14,400.
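(Spelling out where those round numbers appear to come from; treating the thinking per iteration as ~2 minutes and a build-and-test cycle as 60 days are assumptions about the rounding, not stated above.)

```python
# Arithmetic behind the illustrative numbers above. The 996 schedule
# (72 hours/week) and the ~2 months of real-world build-and-test time per
# iteration are from the discussion; rounding the thinking time to ~2
# minutes is an assumption about how the ~43,000 figure was reached.
hours_per_week = 72                              # the "996" schedule
ten_years_of_work_h = 10 * 52 * hours_per_week   # 37,440 hours
speedup = 1_000_000

thinking_min = ten_years_of_work_h * 60 / speedup
print(f"{thinking_min:.2f} min per 10 years of work")  # ~2.25 (the ~2.24 above)

real_world_min_per_iteration = 2 * 30 * 24 * 60        # ~2 months of build/test
slowdown = real_world_min_per_iteration / 2            # vs ~2 min of thinking
print(f"~{slowdown:,.0f}x slower than thinking")       # 43,200

# Cut 10 prototype cycles down to 3 (2 prototypes plus the final version):
print(f"still ~{slowdown / 3:,.0f}x slower")           # 14,400
```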
Machining equipment takes time to cut an engine or nano-lathe a part, and if we are growing human organs to treat VIPs, it takes months for them to grow. Same for anything else you think of. The real world just has all these slow steps, from time for concrete to cure, paint to dry, molten metal in castings to cool, etc.
This, succinctly, is why FOOM is unlikely. But 5-50 years of research in 1 year is still absolutely game changing.
Machining equipment takes time to cut an engine or nano-lathe a part, and if we are growing human organs to treat VIPs, it takes months for them to grow.
That’s why you don’t do any of the slower things at all (in a blocking way), and instead focus on the critical path of controllable cells for macroscopic biotech or something like that, together with the experiments needed to train simulators good enough to design them. This enables exponentially scaling physical infrastructure once completed, which can be used to do all the other things. Simulation here doesn’t mean the methods of today; it means all the computational shortcuts to making correct predictions about the simulated systems that the AGIs can come up with in subjective centuries of thinking, with a few experimental observations to ground the thinking. And once the initial hardware scaling project is completed, it enables much better simulation of more complicated things.
You can speed things up. The main takeaway is that there are 4 orders of magnitude here. Some projects, involving things like interplanetary transits to set up, are going to be even slower than that.
And you will most assuredly start out 4 OOM slower, bootstrapping from today’s infrastructure. Yes, maybe you can eventually develop all the things you mentioned, but there are upfront costs to develop them. You don’t have programmable cells or self-replicating nanotechnology when you start, and you can’t develop them immediately just by thinking about it for thousands of years.
This specifically is an argument against sudden and unexpected “foom” the moment AGI exists. If, 20-50 years later, in a world full of robots and rapid nanotechnology and programmable biology, you start to see exponential progress, that’s a different situation.
Projects that involve interplanetary transit are not part of the development I discuss, so they can’t slow it down. You don’t need to wait for paint to dry if you don’t use paint.
There are no additional pieces of infrastructure that need to be in place to make programmable cells, only their design and what modern biotech already has to manufacture some initial cells. It’s a question of sample efficiency in developing simulation tools: how many observations it takes for the simulation tools to get good enough, given centuries to design the process of deciding what to observe and how to make use of those observations to improve the tools.
So a crux might be the impossibility of creating the simulation tools with data that can be collected in the modern world over a few months. That is an issue distinct from an inability to develop programmable cells.
I think you might have accidentally linked to your comment instead of the LessWrong post you intended to link to.