First, I’m not sure exactly why you think this is bad. Care to say more? My guess is that it just doesn’t fit the intuitive notion that updates should be heading toward some state of maximal knowledge. But we do fit this intuition in other ways; specifically, logical inductors eventually trust their future opinions more than their present opinions.
Personally, I found this result puzzling but far from damning.
Second, I’ve actually done some unpublished work on this. There is a variation of the logical induction criterion which is more relaxed (it admits more things as rational), such that a constant inductor is OK. Let’s call this “weak logical induction”. However, it’s more similar to the original criterion than you might expect. (Credit to Sam Eisenstat for doing most of the work finding the proof.) In particular, iirc, any function from deductive process history to market prices (computable or not) which is a weak logical inductor for every deductive process is also a logical inductor in the original sense.
In other words, there is room to weaken the criterion, but doing so won’t broaden the class of algorithms satisfying the criterion (unless you’re happy to custom-tailor algorithms to specific deductive processes, which replaces induction with simple foreknowledge).
Putting it a different way, define the “universal” LIC (ULIC) to be the property of satisfying the LIC for every deductive process. We can similarly define universal weak logical induction, UWLIC. It turns out that even though LIC and WLIC are different (WLIC allows constant inductors), their universal versions are not different (again, iirc; there may have been additional technical assumptions on the theorem).
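To make the quantifier ordering explicit, here is a hedged sketch of the definitions as I understand them; the notation ($\overline{\mathbb{P}}$ for an inductor viewed as a function from deductive processes to market-price sequences, $D$ for a deductive process) is mine, not the paper's, and omits whatever extra technical conditions the theorem may carry:

```latex
% Hedged sketch; notation is assumed, not the paper's exact notation.
% LIC(P, D): the price sequence P satisfies the logical induction
% criterion relative to the deductive process D; likewise WLIC(P, D).
\[
  \mathrm{ULIC}(\overline{\mathbb{P}})
    \;:\Leftrightarrow\; \forall D.\ \mathrm{LIC}\bigl(\overline{\mathbb{P}}(D),\, D\bigr)
\]
\[
  \mathrm{UWLIC}(\overline{\mathbb{P}})
    \;:\Leftrightarrow\; \forall D.\ \mathrm{WLIC}\bigl(\overline{\mathbb{P}}(D),\, D\bigr)
\]
% Claimed result: WLIC is strictly weaker than LIC for a fixed D
% (constant inductors witness the gap), yet after universally
% quantifying over D the two classes coincide:
\[
  \mathrm{UWLIC} \;=\; \mathrm{ULIC}
\]
```

The point of writing it this way is that the weakening only buys you anything for a *fixed* $D$; once you demand the criterion hold for every deductive process, the gap closes.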
I think the paper made a mistake by focusing on LIC rather than ULIC; Garrabrant induction is really only interesting because it’s universal.
Did the paper also make a mistake by using LIC rather than WLIC? Maybe. I see no intuitive reason why our notion of rationality should be LIC rather than WLIC. Broader is better, if the specificity doesn’t get you anything you intuitively want. But the theorem I’m referring to shows that the damage is minimal, since we really want the universal versions anyway.
Interesting! Can you write up the WLIC, here or in a separate post?
I should! But I’ve got a lot of things to write up!
It also needs a better name, as there have been several things termed “weak logical induction” over time.
Did this ever get written up? I’m still interested in it.
Ah, not yet, no.