Your post seems to disagree with several empirically grounded LessWrong posts. If your model of what simulations can do is wrong, why should anyone believe ASIs will be exempt? Analysis follows:
https://blog.aiimpacts.org/p/you-cant-predict-a-game-of-pinball

This post mathematically shows that it’s impossible to model a game of pinball well enough to predict it at all. Note that if this is an unknown pinball machine (not a perfect ideal one: there are irregularities in the table, wear on the bumpers, and so on), then even an ASI with a simulator cannot actually solve this game of pinball. It will need to play it some.
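The amplification argument behind that post can be sketched numerically. The geometry numbers below (free path of five bumper radii, a microradian of initial uncertainty) are assumptions for illustration, not figures from the post:

```python
# Toy model of pinball sensitivity: a bounce off a convex bumper
# multiplies a small angular error by roughly (1 + 2*L/R), where L is
# the free path between bounces and R the bumper radius.
L_over_R = 5.0                        # assumed geometry
amplification = 1 + 2 * L_over_R      # error growth per bounce: 11x

error = 1e-6                          # initial angular uncertainty (radians)
for bounce in range(1, 13):
    error *= amplification
    if error > 1.0:                   # uncertainty now spans all directions
        print(f"prediction lost after {bounce} bounces")
        break
```

With these numbers all predictive power is gone after half a dozen bounces, which is the qualitative point: no precision of initial measurement survives more than a handful of collisions.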
If you think about the pinball problem in more detail (“give it 5 minutes”), you will realize that brute-force playing of thousands of games isn’t needed. To learn the irregularities of the tabletop, you need the ball to travel over all of it, from probably several different directions and speeds, while a camera observes its motion. To find hidden flaws in the bumpers, you likely need impacts from different angles and speeds.

A variety of microscope scanning techniques work like this. It is also similar to how PBR material scanning is done (example link: https://www.a23d.co/blog/pbr-texture-scanning/).
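Here is a minimal sketch of that kind of sensor-based inference: recovering a (hypothetically constant) table tilt from camera tracking of the ball alone, with no games played. All numbers are invented for illustration:

```python
import numpy as np

# Recover a table's hidden tilt from camera tracking of the ball.
# Positions are sampled every dt seconds; second finite differences
# estimate acceleration, whose mean recovers the constant tilt vector.
dt = 0.01
true_tilt = np.array([0.3, -0.1])             # hidden table tilt (m/s^2)

t = np.arange(0, 1, dt)
pos = 0.5 * true_tilt * t[:, None] ** 2       # ball rolling from rest
pos += np.random.default_rng(0).normal(0, 1e-5, pos.shape)  # camera noise

accel = np.diff(pos, n=2, axis=0) / dt**2     # second finite difference
estimate = accel.mean(axis=0)                 # best fit for constant accel
print(estimate)                               # close to [0.3, -0.1]
```

A real table has position-dependent irregularities rather than one global tilt, so the real version of this fit would need trajectories covering the whole surface — which is exactly the claim in the paragraph above.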
Conclusion: you won’t need the thousands of games a human player needs to get good at a particular pinball table, but you will need to play enough games on a given table, or collect data from it using sensors not available to humans (and not published online in any database; you will have to get humans to set up the sensors over the table, or send robots equipped with them). Without this information, if the task is “achieve expert-level performance on this pinball table, zero-shot, with nothing but a photo of the table”, the task is impossible. No ASI, not even an “infinite superintelligence”, can solve it.

This extends in a general sense: https://www.lesswrong.com/posts/qpgkttrxkvGrH9BRr/superintelligence-is-not-omniscience and https://www.lesswrong.com/posts/etYGFJtawKQHcphLi/bandgaps-brains-and-bioweapons-the-limitations-of
What these LessWrong posts show is that, in known domains, simulation accurate enough to be useful is infeasible on any computer built with current technology, especially in nanoscale domains.
In essence, because electron interactions scale exponentially, it is less expensive to build your apparatus and test it using the universe’s own sim engine than it is to attempt to simulate any large system at the nanoscale.
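The scaling claim can be made concrete. A brute-force representation of an n-electron quantum state needs on the order of 2^n complex amplitudes (cleverer methods exist but remain exponential in the worst case); assuming 16 bytes per complex amplitude:

```python
# Memory needed to store a full n-particle quantum state vector,
# at 16 bytes (two 64-bit floats) per complex amplitude.
BYTES_PER_AMPLITUDE = 16
for n in (10, 50, 100, 300):
    bytes_needed = BYTES_PER_AMPLITUDE * 2 ** n
    print(f"{n} electrons: {bytes_needed:.3e} bytes")
# 50 electrons already exceed a petabyte; 300 exceed the number of
# atoms in the observable universe (~1e80).
```

This is why the paragraph above is not hyperbole: past a few dozen interacting electrons, the physical apparatus is the only feasible "computer".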
Taken together, these posts are a general proof of the following claim:
An AGI cannot invent nanotechnology/brain-hacking/robotics/[insert speculative technology] just from the data already available to humanity, then use its newfound understanding to build nanofactories/take over the world/whatever on the first try.
Unless you can show errors in the above posts, this is impossible. Well, sufficiently improbable that the odds are lower than the 1-in-3-million odds estimated for the Manhattan Project.
Note I have thought a bit about how an ASI could solve this, and the answer is similar to the pinball case above. Rather than building trillions of possible nanostructures and measuring their properties (analogous to having to play 1-10k or more games on a single pinball table to know it), you could build a library of nanostructures, measure them, and predict the properties of many more by affine and other transforms. You could also build quantum computers that essentially predict electron interactions, because the quantum computer itself sets up an analogous electron cloud whose properties you then sample.
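A minimal sketch of the "measure a library, predict the rest" idea: measure a subset of structures, fit a model on simple descriptors, and predict the properties of structures never built. The descriptors and the linear ground truth here are invented purely for illustration:

```python
import numpy as np

# Predict properties of unmeasured nanostructures from a measured library.
rng = np.random.default_rng(1)
descriptors = rng.uniform(size=(40, 3))      # e.g. size, spacing, angle
true_weights = np.array([2.0, -1.0, 0.5])    # hidden structure-to-property map
measured = descriptors @ true_weights + rng.normal(0, 0.01, 40)

# Fit on a library of 30 measured structures; predict the 10 never built.
w, *_ = np.linalg.lstsq(descriptors[:30], measured[:30], rcond=None)
predictions = descriptors[30:] @ w
residual = np.abs(predictions - descriptors[30:] @ true_weights).max()
print(residual)   # small: the library constrains the whole family
```

The catch, of course, is the assumption that properties vary smoothly with the descriptors; where they don't (chaotic or strongly correlated regimes), the library must be densified with more experiments — which is the whole point of the argument.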
So you can reduce the number of experiments below what humans would require, especially as there are fewer mistakes and less duplicated research. It is similar to how a PBR materials scanner takes the minimum number of photos needed to fully capture a material’s properties, or how a lidar scanner collects only enough points to fully cover a surface plus overcome noise.
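The scanner analogy rests on a real theorem: a band-limited signal is fully determined by samples at the Nyquist rate, so there is a hard minimum number of measurements that captures everything, and extra measurements only help against noise. A toy demonstration with an invented surface profile:

```python
import numpy as np

# A surface profile containing frequencies up to 4 cycles per period is
# recovered exactly from just 2*4 + 1 = 9 uniform samples.
def profile(x):
    return np.sin(2 * np.pi * 3 * x) + 0.5 * np.cos(2 * np.pi * 4 * x)

N = 9
samples = profile(np.arange(N) / N)

# Trigonometric interpolation: zero-pad the spectrum onto a dense grid.
spectrum = np.fft.rfft(samples)
dense = np.fft.irfft(spectrum, n=1000) * (1000 / N)

error = np.abs(dense - profile(np.arange(1000) / 1000)).max()
print(error)   # tiny: 9 samples carry all the information
```

Below 9 samples the reconstruction aliases and fails, no matter how clever the algorithm — the information-theoretic floor on experiments that the paragraph above is gesturing at.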
Unless you can show an error, though, reducing the number of experiments to zero is impossible, and we can bet the planet on that.
I think I’ve sufficiently disproven your post entirely and look forward to a response.
Afterword: note that this entire argument is about the path to the endgame. Obviously, once an ASI has very large quantum computers available, it can likely predict the exact behavior of nanoscale structures (including proteins), so long as the problem fits within the qubit limit of the particular machine. Once it has nanoforges, it can order them to self-replicate until there are many trillions of them available, then use them to manufacture the setup for whatever experiment it wants to perform. Once it has access to neural-lace data (from a device similar to a Neuralink), it can probably find out whether there are argument strategies that reliably convince humans to act against their own interests. And so on.
We’re talking about the difference between “ASI can compress 500 years of R&D into 5 weeks” and “ASI can compress 500 years of R&D into 50 years”. Final state’s the same.
Conclusion: you won’t need the thousands of games a human player needs to get good at a particular pinball table, but you will need to play enough games on a given table, or collect data from it using sensors not available to humans (and not published online in any database; you will have to get humans to set up the sensors over the table, or send robots equipped with them).
Here’s the crucial difference from the nanotech case: there is no data available online about that specific pinball table, but there is plenty about the laws of physics. The laws of physics are much simpler than the detailed structure of a given table; everything leaks data about them, everything constrains their possible shape. And we haven’t yet squeezed every bit of evidence about them from the data already available to us.
As an illustrative example, consider AlphaFold. It largely solved protein folding from datasets already available to us: it squeezed more information out of them than we were able to. On the flip side, this implies that those datasets already constrained the protein-folding algorithm uniquely enough that it was inferable; we just didn’t manage to infer it on our own.
It is, of course, a question of informal judgement, but I don’t think there’s a strong case for assuming that this doesn’t extrapolate: that the very similar problem of nanotechnology design isn’t, likewise, already uniquely or near-uniquely constrained by the available data.
… That wasn’t really the core of my argument, though. The core is that practical experience is only useful inasmuch as it informs you about the structure of the environment, and if you can gather that information in other ways (sensors analysing the pinball table), no practical experience is needed. Which you seem to agree with.
The laws of physics are much simpler than the detailed structure of a given table
It is not practical to simulate everything down to the level of the laws of physics. In practice, you usually have to come up with much coarser models that can actually be computed in a reasonable time. Most of the experimentation is needed to construct those models in the first place, so that they align sufficiently with reality, and even then only in certain circumstances.
You could perhaps use quantum mechanics to calculate planetary orbits thousands of years out, but it’s much simpler to use Newtonian mechanics, and that’s because planetary motion happens to be easily modelable that way. The same isn’t true for building rocket engines, or for predicting the stock market or global politics.
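The point about coarse models can be illustrated directly: a few lines of Newtonian mechanics, with no quantum detail at all, propagate an orbit for a century with bounded error. Units and initial conditions are the standard AU/year toy setup:

```python
import numpy as np

# Newtonian two-body problem in AU and years, where GM_sun = 4*pi^2.
GM = 4 * np.pi ** 2
pos = np.array([1.0, 0.0])          # Earth-like circular orbit at 1 AU
vel = np.array([0.0, 2 * np.pi])    # 2*pi AU/yr gives a 1-year period

dt = 0.001
for _ in range(100_000):            # 100 years at dt = 0.001 yr
    acc = -GM * pos / np.linalg.norm(pos) ** 3
    vel += acc * dt                 # symplectic Euler: energy error stays bounded
    pos += vel * dt

print(np.linalg.norm(pos))          # still ~1 AU after 100 years
```

The same integrator applied to, say, turbulent combustion in a rocket engine would be useless; the coarse model works here only because planetary motion happens to have this simple, separable structure.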