I always assumed the original apple-frames-and-grass quote to be... maybe not a metaphor, but at least acknowledged as a theoretical rather than practical ideal: what a hypercomputer executing Solomonoff induction might be able to accomplish.
The actual feat of reasoning described in the story itself is that of an entire civilization of people approaching the known-attainable upper reaches of human intelligence, with all the past data and experience that entails, devoting its entire thought and compute budget for decades to what amounts to a single token-prediction problem with a prompt a few MB in size.
I think we can agree that those are, at least, sufficiently wide upper and lower bounds for what would be required in practice to solve the Alien Physics problem in the story.
Everything else, the parts about spending half a billion subjective years persuading them to let us out of the simulation, is irrelevant to that question. So what really is the practical limit? How much new input to how big a pre-existing model? I don't know. But I do know that while humans have access to lots of data during our development, we throw almost all of it away, and don't have anywhere near enough compute to make thorough use of what's left. Which in turn means that, given the same data but more compute, the attainable rate of learning should be much faster than human.
In any case, an AI doesn’t need to be anywhere near the theoretical limit, in a world where readily available sources of data online include tens of thousands of years of video and audio, and hundreds of terabytes of text.
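To get a feel for how much raw data that last sentence is gesturing at, here is a back-of-envelope sketch. All the specific numbers (20,000 years of video, a 1 Mbps average bitrate, 500 TB of text) are my own illustrative assumptions, not figures from the story or from any survey:

```python
# Back-of-envelope estimate of "tens of thousands of years of video"
# and "hundreds of terabytes of text" in bytes.
# All inputs below are assumed for illustration, not measured.

SECONDS_PER_YEAR = 365 * 24 * 3600  # ~3.15e7 seconds

# Assumption: 20,000 years of video at an average of 1 Mbps.
video_years = 20_000
video_bytes_per_sec = 1_000_000 / 8  # 1 Mbps -> 125 kB/s
video_bytes = video_years * SECONDS_PER_YEAR * video_bytes_per_sec

# Assumption: 500 TB of text.
text_bytes = 500e12

print(f"video: ~{video_bytes / 1e15:.0f} PB")  # tens of petabytes
print(f"text:  ~{text_bytes / 1e15:.1f} PB")
```

Even under these conservative assumptions, the video alone comes out to tens of petabytes, orders of magnitude more than the few-MB prompt the story's civilization worked from.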