I’ve been reading about logical induction. I read that logical induction was considered a breakthrough, but I’m having a hard time understanding its significance. In particular, I’m having a hard time seeing how it outperforms what I call “the naive approach” to logical uncertainty. I imagine there is some notable benefit I’m missing, so I would very much appreciate some feedback.
First, I’ll explain what I mean by “the naive approach”. Consider asking an AI developer with no special background in reasoning under logical uncertainty how to make an algorithm that comes to accurate probability estimates for logical statements. I think the answer is that they would just use standard AI techniques to search the space of reasonably efficient programs for one that generates probability assignments to logical statements, is reasonably simple relative to the amount of data (to avoid overfitting), and has as high a predictive accuracy as possible. Then they would use this program to make predictions about logical statements.
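To make this concrete, here is a minimal Python sketch of the kind of search I have in mind. Everything here is a hypothetical stand-in: `candidates` is whatever enumerable set of efficient predictor programs you search over, and `.prob()` and `.length()` are assumed interfaces, not any real library’s API.

```python
import math

def log_loss(predictor, data):
    """Average negative log-likelihood of the observed truth values."""
    total = 0.0
    for statement, truth in data:
        p = predictor.prob(statement)        # predicted P(statement is true)
        p = min(max(p, 1e-9), 1 - 1e-9)      # clamp away from 0 and 1
        total += -math.log(p if truth else 1 - p)
    return total / len(data)

def naive_search(candidates, data, penalty=0.01):
    """Pick the program with the best accuracy/simplicity trade-off."""
    def score(predictor):
        # lower is better: prediction error plus a crude complexity penalty
        return log_loss(predictor, data) + penalty * predictor.length()
    return min(candidates, key=score)
```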
If you want, you can also make this approach cleaner by using some idealized induction system, like Solomonoff induction, instead of messy, regular machine learning techniques. I still consider this the naive approach.
It seems to me that the naive approach, when used with a sufficiently powerful optimization algorithm, would output probability assignments similar to those of logical induction.
Logical induction says to come up with probability assignments that, when imagined to be market prices, cannot be “exploited” by any efficiently-computable betting strategy.
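To pin down what “exploited” means here, this is a toy Python rendering of the criterion as I understand it. A share in a statement pays 1 if the statement turns out true and 0 if false; I’m flattening the paper’s details about when bets settle, and `market`, `trader`, and `outcomes` are all hypothetical objects.

```python
def trader_profit(trader, market, outcomes, days):
    """Cumulative profit of one betting strategy against the market's prices."""
    profit = 0.0
    for day in range(days):
        for statement in market.statements(day):
            price = market.price(statement, day)   # market's probability
            shares = trader.bet(statement, price)  # positive buys, negative sells
            payoff = 1.0 if outcomes[statement] else 0.0
            profit += shares * (payoff - price)
    return profit
```

The requirement is that no efficiently computable `trader` can drive this profit to infinity against the inductor’s prices.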
But why wouldn’t the naive approach do the same thing? If there were an efficient strategy that exploits the probability assignments an algorithm gives, then I think you could make a new, still efficiently computable algorithm that comes up with more accurate probability assignments and thereby avoids the exploitation. And a sufficiently powerful machine learning algorithm could find it.
In other words, if one system for outputting probability assignments to logical statements can be exploited by an efficient strategy, a new system can be made that performs better by adjusting its prices so that the strategy can no longer exploit the market.
To see it another way: if there is some way to exploit the market, that’s because there is some way to accurately and efficiently predict when the system’s prices are wrong, and this could be used to form a betting strategy that exploits the agent. So if you instead use a different algorithm, like the original one but adjusted so that that strategy can no longer exploit it, you would get a program that outputs probability assignments with higher predictive accuracy. A sufficiently powerful optimizer could find that program with the naive approach.
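Here is a sketch of the patch I have in mind, again with hypothetical names. If `trader` reliably profits against the base prices, nudge each price in the direction the trader bets until the trader no longer sees an edge. (This is only my intuition for the fix, not the paper’s actual construction, which finds fixed-point prices against traders in a more careful way.)

```python
def patched_price(base, trader, statement, day, step=0.05, rounds=100):
    """Adjust one price until the given trader stops wanting to bet on it."""
    price = base.price(statement, day)
    for _ in range(rounds):
        shares = trader.bet(statement, price)  # trader's demand at this price
        if abs(shares) < 1e-6:                 # no edge left, so stop
            break
        # buying pressure suggests the price is too low; selling, too high
        price = min(max(price + step * shares, 0.0), 1.0)
    return price
```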
Consider the possibility that the naive approach is used with an optimization algorithm powerful enough to find the very best-performing efficient, non-overfitted pricing strategy given its data. It’s not clear to me how such an algorithm could be exploited by a trader. Even if there were problems in the algorithm it initially learned, further learning could correct them and avoid the exploitation. Maybe there is still somehow some way to do some sort of minor exploitation of such a system, but it’s not clear how it could be done to any significant degree.
So, if I’m reasoning correctly, it seems that the naive approach could end up approximating logical induction anyway, or perhaps performing it exactly in the case of unlimited processing power.