I’ll try to respond properly later this week, but I like the point that embedded agency is about boundedness. Nevertheless, I think we probably disagree about how promising it is “to start with idealized rationality and try to drag it down to Earth rather than the other way around”. If the starting point is incoherent, then this approach doesn’t seem like it’ll go far—if AIXI isn’t useful to study, then probably AIXItl isn’t either (although take this particular example with a grain of salt, since I know almost nothing about AIXItl).
I appreciate that this isn’t an argument that I’ve made in a thorough or compelling way yet—I’m working on a post which does so.
> If the starting point is incoherent, then this approach doesn’t seem like it’ll go far—if AIXI isn’t useful to study, then probably AIXItl isn’t either (although take this particular example with a grain of salt, since I know almost nothing about AIXItl).
Hm. I already think the starting point of Bayesian decision theory (which is even “further up” than AIXI in how I am thinking about it) is fairly useful.
In a naive sort of way, people can handle uncertain gambles by choosing a quantity to treat as ‘utility’ (such as money), quantifying probabilities of outcomes, and taking expected values. This doesn’t always serve very well (e.g. one might prefer Kelly betting), but it was the historical starting point (probability theory itself grew out of gambling games), and the idea seems like a useful decision-making mechanism in a lot of situations.
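To make the contrast concrete, here is a minimal sketch of both decision rules. The numbers (a 60% win chance at even odds) are made up for illustration; the point is just that naive expected value and the Kelly criterion can recommend very different stake sizes on the same gamble.

```python
def expected_value(p_win: float, payout: float, stake: float) -> float:
    """Naive expected value of a gamble: win `payout` with probability
    p_win, lose `stake` otherwise."""
    return p_win * payout - (1 - p_win) * stake

def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Kelly criterion: the fraction of bankroll that maximizes
    long-run log-wealth, f* = p - (1 - p) / b."""
    return p_win - (1 - p_win) / net_odds

p = 0.6
# An even-money bet with p = 0.6 has positive expected value per unit
# staked, so pure EV-maximization says to stake as much as possible...
ev = expected_value(p, payout=1.0, stake=1.0)   # 0.6*1 - 0.4*1 = 0.2
# ...while Kelly says to stake only 20% of bankroll, avoiding ruin.
f = kelly_fraction(p, net_odds=1.0)             # 0.6 - 0.4/1 = 0.2
```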
Perhaps more convincingly, probability theory seems extremely useful, both as a precise tool for statisticians and as a somewhat looser analogy for thinking about everyday life, cognitive biases, etc.
AIXI adds to all this the idea of quantifying Occam’s razor with algorithmic information theory, which seems to be a very fruitful idea. But I guess this is the sort of thing we’re going to disagree on.
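A toy sketch of the idea, not AIXI itself: give each hypothesis a prior weight of 2^(-description length), so shorter descriptions get exponentially more prior mass. The hypothesis names and bit-lengths below are invented for illustration; a real Solomonoff prior would range over programs for a universal machine.

```python
# Hypothetical description lengths, in bits, for three competing
# explanations of some data (these numbers are made up).
hypotheses = {
    "constant": 3,
    "linear": 5,
    "lookup-table": 20,
}

# Occam's razor, AIT-style: prior weight proportional to 2^(-length).
weights = {h: 2.0 ** -length for h, length in hypotheses.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

# "constant" outweighs "lookup-table" by a factor of 2^17: simplicity
# is penalized exponentially in description length.
```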
As for AIXItl, I think it’s sort of taking the wrong approach to “dragging things down to earth”. Logical induction simultaneously makes things computable and solves a new set of interesting problems having to do with accomplishing that. AIXItl feels more like trying to stuff an uncomputable peg into a computable hole.