Really interesting post, I appreciate the thought experiment. I have one comment on it related to the Crystal Nights and Skunkworks sections, based on my own experience in the aerospace world. There are lots of problems that I deal with today where the limiting factor is the existence of high-quality experimental data (for example, propellant slosh dynamics in zero-g). This has two implications:
For the “Crystal Nights” example, I think that our current ability to build virtual worlds that are useful for evolutionarily creating truly transformative AIs may be more limited than you might think. A standard physics-simulation-based environment is likely to not be that good a map of the real world. And a truly “bottom-up” simulation environment that recreated physics by simulating down at the molecular level would require a few orders of magnitude more computing power (and may run into similar issues with the fidelity of training data for modeling molecular interactions, though AlphaFold is evidence that this is not as great a limitation).
For the “Skunkworks” example, I think that you may run into similar problems where the returns to more computing power are greatly limited by the fidelity of the training data.
Now if the fidelity of training data were the only thing holding Google et al. back from making trillions off of AI in this world, there would be a very strong push to gather the necessary data. But that kind of work in the physical world tends to move more slowly, and could well push the timelines required for these two applications past the 4-year mark. I couldn’t find similar objections to the other three.
Thanks! Yeah, I basically agree with you overall that Skunkworks could be undermined by our lack of understanding of real-world physics. We certainly wouldn’t be able to create perfectly accurate simulations even if we threw all 35 OOMs at the problem. The question is whether the simulations would be accurate enough to be useful. My argument is that since we already have simulations which are accurate enough to be useful, adding +12 OOMs should lead to simulations which are even more useful. But useful enough to lead to crazy transformative stuff? Yeah, I don’t know.
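For a rough sense of what +12 OOMs buys here, a back-of-envelope under the standard explicit-solver assumption that the number of timesteps $N_t$ has to grow in proportion to the per-axis grid resolution $N$:

$$ \text{cost} \;\propto\; N^3 \times N_t \;\propto\; N^4 \quad\Longrightarrow\quad +12\ \text{OOMs} \;\Rightarrow\; N \to 10^{12/4}\,N = 1000\,N $$

i.e. roughly a thousand-fold finer resolution along each spatial axis. That's a big jump in what you can resolve directly, but it doesn't by itself fix underlying physical models that are wrong or starved of experimental data.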
For Crystal Nights, I’m more ‘optimistic.’ The simulation doesn’t have to be accurate at all really. It just has to be complex enough, in the right sort of ways. If you read the Crystal Nights short story I linked, it involves creating a virtual world and evolving creatures in it, but the creators don’t even try to make the physics accurate; they deliberately redesign the physics to be both (a) easier to compute and (b) more likely to lead to intelligent life evolving.
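To make the “complex enough, not accurate” point concrete, here’s a toy sketch (purely illustrative, nowhere near the scale or richness that would actually be needed): an evolutionary loop in a made-up one-dimensional “world” whose rules are chosen to be cheap to compute rather than physically realistic. All constants, the genome encoding, and the fitness rule are invented for the example.

```python
import random

# Purely illustrative: evolution in a deliberately simple, non-physical "world".
# All constants and the fitness rule are invented for this sketch.

WORLD_SIZE = 20            # 1-D world of discrete positions
GENOME_LEN = 16            # each gene is a movement step in {-1, 0, +1}
POP_SIZE = 50
GENERATIONS = 200
FOOD = {3, 7, 11, 15, 19}  # positions that reward an agent that lands on them

def fitness(genome):
    """One 'lifetime': walk the world, score a point each time we land on food."""
    pos, score = 0, 0
    for step in genome:
        pos = max(0, min(WORLD_SIZE - 1, pos + step))
        if pos in FOOD:
            score += 1
    return score

def mutate(genome, rate=0.1):
    return [random.choice([-1, 0, 1]) if random.random() < rate else g for g in genome]

# Random initial population, then select-and-mutate each generation.
population = [[random.choice([-1, 0, 1]) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness:", max(fitness(g) for g in population))
```

The point of the toy: nothing in that environment tries to match real physics, yet selection still produces agents adapted to it. The open question is how rich the environment has to be before the adapted thing is a general intelligence rather than a lookup table, and that richness is where the compute would go.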
Your comment about Crystal Nights makes sense. I guess humans evolved in a world governed by one set of physical laws, but we’re general-purpose intelligences that can do things like play videogames really well even when the game’s physics don’t match the real world’s.