If it’s only trained to solve arithmetic and there are no additional sensory modalities aside from the buttons on a typical calculator, how does increasing this AI’s compute/power lead to it becoming an optimizer over a wider domain than just arithmetic?
That was a poetic turn of phrase, yeah. I didn’t mean a literal arithmetic calculator; I meant general-purpose theorem-provers/math engines. Given a sufficiently difficult task, such a model may need to invent and abstract over entire new fields of mathematics in order to solve it in a compute-efficient manner. And that capability goes hand-in-hand with runtime optimization.
Do you think it might be valuable to find a theoretical limit showing whether the amount of compute needed for such epsilon-details to be usefully incorporated is greater than will ever be feasible?
I think something like this was on the list of John’s plans for empirical tests of the NAH (Natural Abstraction Hypothesis), yes. In the meantime, my understanding is that the NAH explicitly hinges on assuming this is true.
Which is to say: Yes, an AI may discover novel, lower-level abstractions, but then it’d use them in concert with the interpretable higher-level ones. It wouldn’t replace high-level abstractions with low-level ones, because the high-level abstractions are already as efficient as they get for the tasks we use them for.
You could dip down to a lower level when optimizing some specific action — like fine-tuning the aim of your energy weapon to fry a given person’s brain with maximum efficiency — but when you’re selecting the highest-priority person to kill to cause the most disarray, you’d be thinking about “humans” in the context of “social groups”, explicitly. The alternative — modeling the individual atoms bouncing around — would be dramatically more expensive, while not improving your predictions much, if at all.
It’s analogous to how we’re still using Newton’s laws in some cases, despite in principle having ample compute to model things at a lower level. There’s just no point.
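For concreteness, here’s a quick back-of-envelope sketch (the speeds are just illustrative picks, not anything from the discussion above) of how little the lower-level relativistic model buys you over Newton at everyday velocities:

```python
# Back-of-envelope: size of the relativistic correction to Newtonian mechanics
# at everyday speeds. The chosen speeds are illustrative.
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v: float) -> float:
    """Relativistic correction factor; Newtonian mechanics assumes gamma == 1."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for label, v in [("car, 30 m/s", 30.0),
                 ("airliner, 250 m/s", 250.0),
                 ("ISS, ~7700 m/s", 7_700.0)]:
    gamma = lorentz_gamma(v)
    print(f"{label}: gamma - 1 = {gamma - 1:.2e}")

# Even for the ISS the correction is on the order of 3e-10: the lower-level
# model is far more expensive to run and changes the predictions negligibly.
```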
Thanks so much for the response, this is all clear now!