Conclusion: you won’t need the thousands of games a human player needs to get good at a particular pinball table, but you will need to play enough games on a given table, or collect data from it using sensors not available to humans (and not published online in any database; you would have to get humans to set up the sensors over the table, or send robots equipped with them).
There’s the crucial difference from the nanotech case: there, plenty of data about the equivalent of that specific pinball table is already available online. The laws of physics are much simpler than the detailed structure of a given table, and everything leaks data about them; everything constrains their possible shape. And we haven’t yet squeezed every bit of evidence about them from the data already available to us.
As an illustrative example, consider AlphaFold. It was able to largely solve protein folding from the datasets already available to us: it squeezed more information out of them than we had managed to. On the flip side, this implies that those datasets already constrained the protein-folding algorithm uniquely enough for it to be inferable; we just hadn’t managed to do the inference ourselves.
It is, of course, a question of informal judgement, but I don’t think there’s a strong case for assuming this doesn’t extrapolate: that the very similar problem of nanotechnology design isn’t, likewise, already uniquely or near-uniquely constrained by the available data.
… That wasn’t really the core of my argument, though. The core is that practical experience is only useful inasmuch as it informs you about the structure of the environment; if you can gather that information in other ways (sensors analysing the pinball table), no practical experience is needed. Which you seem to agree with.
The laws of physics are much simpler than the detailed structure of a given table
It is not practical to simulate everything down to the level of the laws of physics. In practice, you usually have to come up with much coarser models that can actually be computed in reasonable time, and most of the experimentation is needed to construct those models in the first place, so that they align sufficiently with reality, and even then only in certain circumstances.
You could perhaps use quantum mechanics to calculate planetary orbits thousands of years out, but it’s much simpler to use Newtonian mechanics, and that works only because planetary motion happens to be easily modelled that way. The same isn’t true for building rocket engines, or for predicting the stock market or global politics.
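To make the "coarse model" point concrete, here is a minimal sketch (my own illustration, not anything from the discussion above) of just how little machinery a Newtonian planetary model needs: a two-body point-mass simulation of Earth around the Sun, integrated with semi-implicit Euler at one-hour steps. The constants are standard physical values; the function name and structure are mine.

```python
import math

# GM of the Sun (standard gravitational parameter), in m^3 / s^2.
GM_SUN = 1.32712440018e20

def simulate_orbit(x, y, vx, vy, dt, steps):
    """Semi-implicit Euler integration of a point mass under solar gravity.

    Deliberately crude: no relativity, no other planets, first-order
    integrator. Illustrative of a coarse model, not a production one.
    """
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax = -GM_SUN * x / r3
        ay = -GM_SUN * y / r3
        vx += ax * dt          # update velocity first ...
        vy += ay * dt
        x += vx * dt           # ... then position (keeps energy bounded)
        y += vy * dt
    return x, y, vx, vy

# Earth: ~1 AU from the Sun, ~29.78 km/s orbital speed.
AU = 1.496e11
x, y, _, _ = simulate_orbit(AU, 0.0, 0.0, 2.978e4, 3600.0, int(365.25 * 24))

# After one simulated year the body should be back near its starting
# point, to within a small fraction of an AU.
print(math.hypot(x - AU, y) / AU)
```

A few dozen lines reproduce a year-long orbit to good accuracy, whereas no comparably tiny model exists for a rocket engine or a stock market; that asymmetry is exactly the point being made above.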