Yes! If UDT solves this problem, that’s extremely good news. I mention the possibility here. Unfortunately, I (and several others) don’t understand UDT well enough to tease out all the pros and cons of this approach. It might take a workshop to build a full consensus about whether it solves the problem, as opposed to just reframing it in new terms (and, if it is a reframing, about how much the reframing deepens our understanding).
Do you have any specific questions about UDT that I can help answer? MIRI has held two decision theory workshops that I attended, and AFAIK nobody had a lot of difficulty understanding UDT, or thought that the UDT approach would have trouble with the kind of problem that you are describing in this sequence. It doesn’t seem very likely to me that someone would hold another workshop specifically to answer whether UDT handles this problem correctly, so I think our best bet is to just hash it out in this forum. (If we run into a lot of trouble communicating, we can always try something else at that point.)
(If you want to do this after your next post, go ahead. But again, you seem to be putting a lot of time and effort into writing this sequence, whereas if you spent a bit more time on UDT first, maybe you’d conclude “OK, this looks like a solved problem; let’s move on, at least for now.” It’s not as though there’s a shortage of other interesting and important problems to work on or introduce to people.)
Part of the goal of this sequence is to put introductory material about this problem in a single place, to get new workshop attendees and LWers on the same page faster.
I guess part of what’s making me think “you seem to be spending too much time on this” is that the problems/defects you’re describing with the AIXI approach seem really obvious here, at least in comparison to some other FAI-related problems. If somebody couldn’t see them right away, or understand them in a few paragraphs, I think it’s pretty unlikely that they’d be able to contribute much to the kinds of problems that I’m interested in now.
“AFAIK nobody had a lot of difficulty understanding UDT, or thought that the UDT approach would have trouble with the kind of problem that you are describing in this sequence.”
For what it’s worth, I had a similar impression before, but now I suspect that either Eliezer doesn’t understand how UDT deals with that problem, or he has some objection that I don’t understand. That may or may not have something to do with his insistence on using causal models, which I also don’t understand.
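(For readers following along: here is a rough sketch of the decision rule under discussion, in my own notation rather than anything canonical from this exchange. A UDT agent never updates on its observations; it simply picks the output whose assumed logical consequences, summed over all the possible worlds in its prior, maximize expected utility:

\[ Y^* = \operatorname*{arg\,max}_{Y} \sum_i P(W_i)\, U\big(W_i[S(X) := Y]\big) \]

where S is the agent’s program, X its input, the W_i are the possible worlds weighted by its prior P, and W_i[S(X) := Y] is world W_i evaluated under the assumption that every instance of S, wherever it is embedded, outputs Y on input X. Because the sum already ranges over worlds in which S is part of the environment, this is the sense in which UDT is claimed above to sidestep the Cartesian agent/environment boundary that generates the AIXI defects this sequence describes.)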