No, this hardware would not make simulations faster. Different hardware could speed things up some, but since simulations are already run on supercomputers clocked at multiple GHz, the speedup would be about 1 OOM, which is typical for going from general-purpose processors to ASICs. Simulation would still be the bottleneck.
This LessWrong post argues pretty convincingly that simulations cannot model everything, especially for behavior relevant to nanotechnology and medicine:
https://www.lesswrong.com/posts/etYGFJtawKQHcphLi/bandgaps-brains-and-bioweapons-the-limitations-of
Assuming the physicist who wrote that LessWrong post is correct, cycles of trial and error, prototyping, and experiments are unavoidable.
I also agree with the post for a different reason: real experimental data, such as human-written papers on biology or nanoscale chemistry, leaves enough uncertainty to fit trucks through. The issue is that you have hand-copied fields of data, large datasets withheld because they had negative findings, needlessly vague language describing what was done, different labs in different places with different staff and equipment, different subjects (current tech cannot simulate or build a living mockup of a human body, and due to the above there is insufficient data to do either), and so on.
You have to try things, even if it's just to collect data you will use in your simulation, and 'trying stuff' is slow. (Mammalian cells take hours to weeks to grow complex structures, electron-beam nanolathes take hours to carve a new structure, etc.)
When you design a thing, you can intentionally make it more predictable and faster to test, in particular with modularity. If the goal is designing cells that grow and change in controllable ways, all experiments are tiny. As in machine learning, new observations from the experiments generalize by improving the simulation tools, not just the object-level designs. And a much more advanced theory of learning should enable much better sample efficiency with respect to external data.
If a millionfold speedup is already feasible today, it doesn't take hardware advancement, and as a milestone it indicates no hardware benefit for simulation. That point responded to the hypothetical where hardware is already massively scaled up compared to today (such as through macroscopic biotech scaling physical infrastructure), which should, as another consequence, make simulation of physical designs much better (on its own hardware specialized for being good at simulation). For example, this is where I expect uploading to become feasible to develop, not at the 300x-speedup stage of software-only improvement, because simulating wild systems is harder than designing something predictable.
(This is exploratory engineering, not forecasting; I don't actually expect human-level AGI without superintelligence to persist that long, and if nanotech is possible I don't expect scaling of macroscopic biotech. But neither seems crucial.)
When you design a thing, you can intentionally make it more predictable and faster to test,
Absolutely. This happens today, where silicon release cycles only leave time for a few revisions.
My main point with the illustrative numbers was to show how the time complexity works. You have this million-times-faster AI; it can do 10 years of work in about 2.24 minutes, it seems.
(Assuming a human works a 996 schedule, i.e. 72 hours a week.)
Even if we take the most generous possible assumptions about how long it takes to build something real, test it, and then fix your mistakes, the limiting factors are about 43,000 times slower than we can think. Say we reduce our serial testing steps and only need 2 prototypes and then the final version instead of 10.
So we made it about 3 times faster!
The real world is still slowing us down by a factor of roughly 14,400.
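For concreteness, here is a minimal sketch of that arithmetic in Python (my own reconstruction, not from the original comment; the ~43,200x baseline and the 3x reduction are taken as given from the text, and the work-week assumptions are the ones stated above):

```python
# Back-of-the-envelope check of the illustrative numbers above.
# Assumed for illustration: a "996" schedule of 72 working hours/week,
# 52 weeks/year, and an AI that thinks 1,000,000x faster than a human.

HOURS_PER_WEEK = 72
WEEKS_PER_YEAR = 52
SPEEDUP = 1_000_000

# 10 years of human-pace work, compressed by the speedup:
human_hours = 10 * WEEKS_PER_YEAR * HOURS_PER_WEEK   # 37,440 hours
ai_minutes = human_hours * 60 / SPEEDUP              # ~2.25 minutes

# Baseline taken as given from the comment: real-world build/test loops
# run ~43,200x slower than the AI can think (rounded to 43,000 above).
baseline_slowdown = 43_200

# Cutting 10 serial prototype cycles to 3 (2 prototypes + final version)
# makes the serial testing path roughly 3x faster:
reduced_slowdown = baseline_slowdown / 3             # 14,400

print(f"10 human-years of work at AI speed: {ai_minutes:.2f} minutes")
print(f"remaining real-world slowdown: {reduced_slowdown:,.0f}x")
```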
Machining equipment takes time to cut an engine block or nano-lathe a part, and if we are growing human organs to treat VIPs, it takes months for them to grow. Same for anything else you can think of. The real world just has all these slow steps: time for concrete to cure, paint to dry, molten metal in castings to cool, etc.
This, succinctly, is why FOOM is unlikely. But 5-50 years of research in 1 year is still absolutely game-changing.
Machining equipment takes time to cut an engine block or nano-lathe a part, and if we are growing human organs to treat VIPs, it takes months for them to grow.
That's why you don't do any of the slower things at all (in a blocking way), and instead focus on the critical path of controllable cells for macroscopic biotech, or something like that, together with the experiments needed to train simulators good enough to design them. Once completed, this enables exponentially scaling physical infrastructure, which can be used to do all the other things. Simulation here is not the methods of today; it's all the computational shortcuts for making correct predictions about the simulated systems that the AGIs can come up with over subjective centuries of thinking, with a few experimental observations to ground the thinking. And once the initial hardware-scaling project is completed, it enables much better simulation of more complicated things.
You can speed things up. The main takeaway is that there are 4 orders of magnitude here. Some projects, ones that involve things like interplanetary transits to set up, are going to be even slower than that.
And you will most assuredly start out 4 OOM slower when bootstrapping from today's infrastructure. Yes, maybe you can eventually develop all the things you mentioned, but there are upfront costs to develop them. You don't have programmable cells or self-replicating nanotechnology when you start, and you can't develop them immediately just by thinking about it for thousands of years.
This specifically is an argument against sudden and unexpected "foom" the moment AGI exists. If, 20-50 years later, in a world full of robots, rapid nanotechnology, and programmable biology, you start to see exponential progress, that's a different situation.
Projects that involve interplanetary transit are not part of the development I discuss, so they can’t slow it down. You don’t need to wait for paint to dry if you don’t use paint.
There are no additional pieces of infrastructure that need to be in place to make programmable cells, only their design and what modern biotech already has in order to manufacture some initial cells. It's a question of sample efficiency in developing simulation tools: how many observations it takes for the simulation tools to get good enough, if you had centuries to design the process of deciding what to observe and how to make use of the observations to improve the tools.
So a crux might be the impossibility of creating the simulation tools with data that can be collected in the modern world over a few months. That's an issue distinct from an inability to develop programmable cells.
Hardware: https://www.lesswrong.com/posts/adadYCPFAhNqDA5Ye/processor-clock-speeds-are-not-how-fast-ais-think?commentId=Nd8h72ZqSJfsMJK8M
I think you might have accidentally linked to your comment instead of the LessWrong post you intended to link to.