Order-dependence and butterfly effects—knew about this and had it in mind when I wrote CEV; I think it should be in the text.
Counterfactual Mugging—check; I don’t think I was calling TDT a complete solution before then, but the Counterfactual Mugging was a class of possibilities I hadn’t considered. (It does seem related to Parfit’s Hitchhiker, which I knew was a problem.)
Solomonoff Induction—again, I think you may be overestimating how much weight I put on that in the first place. It’s not a workable AI answer for at least two obvious reasons I’m pretty sure I knew about from almost-day-one: (a) it’s uncomputable and (b) it can’t handle utility functions over the environment. However, your particular contributions about halting-oracles-shouldn’t-be-unimaginable did indeed influence me toward my current notion of second-order logical natural induction over possible models of axioms in which you could be embedded. Albeit I stand by my old reply that Solomonoff Induction would encompass any computable predictions or learning you could do about halting oracles in the environment. (The problem of porting yourself onto any environmental object is something I already knew AIXI would fail at.)
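For concreteness, the object being discussed is the standard Solomonoff prior: fix a universal prefix machine $U$, and give a finite sequence $x$ the weight

$$M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-|p|},$$

where the sum runs over every program $p$ whose output begins with $x$. Evaluating that sum exactly would require knowing which programs halt and what they print, which is reason (a); and since $M$ only scores predictions of the percept stream, it gives no native way to define a utility function over the environment itself, which is reason (b).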
> Order-dependence and butterfly effects—knew about this and had it in mind when I wrote CEV; I think it should be in the text.
Ok, I checked the CEV writeup and you did mention these briefly. But that makes me unsure why you claimed to have solved metaethics. What should you do if your FAI comes back and says that your EV shows no coherence due to order dependence and butterfly effects (assuming it’s not some kind of implementation error)? If you’re not sure the answer is “nothing”, and you don’t have another answer, doesn’t that mean your solution (about the meaning of “should”) is at least incomplete, and possibly wrong?
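To make the order-dependence worry concrete, here is a toy Python sketch (purely illustrative; it is not the CEV extrapolation procedure, and the `expose` step is invented for the example): when the "extrapolation" update is not commutative, the result depends on which influence is processed first, so two runs over the same starting preferences can disagree.

```python
# Toy illustration of order dependence (not the CEV procedure):
# an "extrapolation" step that promotes whichever option the most recent
# argument favors.  Because the step is not commutative, the final ranking
# depends on the order in which the arguments are encountered.

def expose(ranking, argument):
    """Return a new ranking with the option favored by `argument` on top."""
    favored = argument["favors"]
    return [favored] + [x for x in ranking if x != favored]

initial = ["A", "B", "C"]
arg1 = {"favors": "B"}
arg2 = {"favors": "C"}

order_12 = expose(expose(initial, arg1), arg2)  # -> ['C', 'B', 'A']
order_21 = expose(expose(initial, arg2), arg1)  # -> ['B', 'C', 'A']

print(order_12, order_21, order_12 == order_21)  # different rankings
```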
> Counterfactual Mugging—check; I don’t think I was calling TDT a complete solution before then, but the Counterfactual Mugging was a class of possibilities I hadn’t considered. (It does seem related to Parfit’s Hitchhiker, which I knew was a problem.)
You said that TDT solves Parfit’s Hitchhiker, so I don’t know if you would have kept looking for more problems related to Parfit’s Hitchhiker and eventually come upon Counterfactual Mugging.
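For readers without the payoffs in mind, here is a minimal numeric sketch of the Counterfactual Mugging, using the figures usually quoted ($100 demanded on tails, $10,000 counterfactually paid on heads); the tension is that the "pay" policy wins when evaluated before the coin flip but looks like a pure loss once you have already seen tails, which is what makes it a different test case from Parfit’s Hitchhiker.

```python
# Hedged sketch of the Counterfactual Mugging with the usual payoffs:
# Omega flips a fair coin; on tails it asks you for $100, on heads it pays
# you $10,000 iff it predicts you would have paid had the coin come up tails.

P_HEADS = 0.5
REWARD, COST = 10_000, 100

# Expected value of each policy, computed before the coin flip.
ev_pay = P_HEADS * REWARD + (1 - P_HEADS) * (-COST)  # 4950.0
ev_refuse = 0.0                                       # nothing gained or lost

print(ev_pay, ev_refuse)  # the "pay" policy dominates ex ante
```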
> Solomonoff Induction—again, I think you may be overestimating how much weight I put on that in the first place. It’s not a workable AI answer for at least two obvious reasons I’m pretty sure I knew about from almost-day-one: (a) it’s uncomputable and (b) it can’t handle utility functions over the environment.
Both of these can be solved without also solving the halting-oracles-shouldn’t-be-unimaginable problem. For (a), solve logical uncertainty. For (b), switch to UDT-with-world-programs.
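Here is a minimal sketch of what "UDT-with-world-programs" could look like; this is one reading of the idea rather than any canonical formalization, and the particular world program and payoffs are just the Counterfactual Mugging example from above. The agent is scored by world programs that embed its policy directly, and it picks the observation-to-action mapping that maximizes total utility across those programs; because utility is computed from what the world programs do rather than from a reward channel in the percept stream, a utility function over the environment is unproblematic, which is the sense in which (b) gets sidestepped.

```python
# Minimal sketch of a UDT-style agent over world programs (one possible
# reading, not Wei Dai's exact formalism).  Each world program takes the
# agent's whole policy and returns a utility; the agent picks the policy
# maximizing summed utility across all world programs it considers possible.

from itertools import product

OBSERVATIONS = ["heads", "tails"]
ACTIONS = ["pay", "refuse"]

def mugging_world(policy):
    """Toy world program: the Counterfactual Mugging with a perfect predictor.
    The policy is consulted both for the actual 'tails' branch and for the
    prediction used in the 'heads' branch."""
    would_pay = policy["tails"] == "pay"
    u_heads = 10_000 if would_pay else 0
    u_tails = -100 if would_pay else 0
    return 0.5 * u_heads + 0.5 * u_tails

def best_policy(world_programs):
    """Enumerate all observation-to-action mappings and keep the best one."""
    policies = [dict(zip(OBSERVATIONS, acts))
                for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]
    return max(policies, key=lambda p: sum(w(p) for w in world_programs))

print(best_policy([mugging_world]))  # chooses a policy that pays on 'tails'
```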
Also, here is another problem that maybe you weren’t already aware of.
> What should you do if your FAI comes back and says that your EV shows no coherence due to order dependence and butterfly effects (assuming it’s not some kind of implementation error)?
Wouldn’t that kind of make moral reasoning impossible?