About 2TDT-1CDT: Wei didn’t seem to consider it 100% solved as of this August or September, if I recall right. You’ll have to ask him.
About ASP: I agree with Gary that we do not yet completely understand the implications of the fact that a human like me can win in this situation while UDT can’t.
About A/B/~CON: I’d like to see some sort of mechanical reasoning procedure that leads to the answer. You do remember that Wei’s “existential” patch has been shown not to work, and that my previous algorithm without that patch can’t handle this particular problem, right?
(For onlookers: this exchange refers to a whole lot of previous discussion on the decision-theory-workshop mailing list. Read at your own risk.)
> About ASP: I agree with Gary that we do not yet completely understand the implications of the fact that a human like me can win in this situation while UDT can’t.
Both outcomes are stipulated in the corresponding unrelated decision problems. This is an example of explicit dependency bias, where you consider a collection of problem statements indexed, in an arbitrary way, by agents’ algorithms or agents’ decisions. Nothing follows merely from there being a collection with such-and-such consequences of picking a certain element of it. The relation between the agents and the problem statements connected in such a collection is epiphenomenal to the agents’ adequacy. I should probably write up a post to that effect. Only ambient consequences count, where you are already the agent that is part of (a state of knowledge about) an environment and need to figure out what to do, for example which AI to construct and submit your decision to. Otherwise you are changing the problem, not reasoning about what to do in a given problem.
> About A/B/~CON: I’d like to see some sort of mechanical reasoning procedure that leads to the answer. You do remember that Wei’s “existential” patch has been shown not to work, and that my previous algorithm without that patch can’t handle this particular problem, right?
You can infer that A=>U ∈ {5,6} and B=>U ∈ {10,11}. Then, instead of recognizing only moral arguments of the form A=>U=U1, you need to be able to recognize more general arguments of this kind. It’s clear which of the two to pick.
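For concreteness, here is a minimal sketch of the kind of mechanical procedure this suggests — my own illustration, not Wei’s or anyone else’s actual algorithm, with made-up function names: once the agent has inferred a set of possible utilities for each action, it can pick an action whose worst case beats every other action’s best case, which is exactly what happens with {10,11} versus {5,6}.

```python
# Hedged sketch: pick an action whose inferred utility set strictly dominates
# the rest, i.e. its minimum exceeds every other action's maximum.
# Sufficient for the A/B/~CON case above, where {10, 11} dominates {5, 6}.

def dominant_action(possible_utilities):
    """possible_utilities: dict mapping action -> set of utilities the agent
    has proved possible for that action. Returns a strictly dominant action,
    or None if there isn't one."""
    for action, utils in possible_utilities.items():
        others_max = max(u for a, us in possible_utilities.items()
                         if a != action for u in us)
        if min(utils) > others_max:
            return action
    return None

print(dominant_action({"A": {5, 6}, "B": {10, 11}}))  # -> B
```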
> You can infer that A=>U ∈ {5,6} and B=>U ∈ {10,11}. Then, instead of recognizing only moral arguments of the form A=>U=U1, you need to be able to recognize more general arguments of this kind. It’s clear which of the two to pick.
Is that the only basis on which UDT or a UDT-like algorithm would decide such a problem? What about a variant where action A gives you $5, plus $6 iff it is ever proved that P≠NP, and action B gives you $10, plus $5 iff it is ever proved that P=NP? Here too you could say that A=>U ∈ {5,11} and B=>U ∈ {10,15}, but A is probably preferable.
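A hedged sketch of why range comparison alone doesn’t settle this variant: the dominance check above no longer applies (A’s best case, 11, exceeds B’s worst case, 10, so neither set strictly dominates), and once you bring in subjective credences about which proof, if any, ever appears — the numbers below are made up purely for illustration — A’s expected payoff can come out ahead of B’s.

```python
# Hedged illustration with made-up credences, not anyone's considered numbers.
# p_neq: credence that a proof of P != NP eventually appears;
# p_eq:  credence that a proof of P = NP eventually appears.
p_neq, p_eq = 0.9, 0.01

expected_A = 5 + 6 * p_neq   # $5 guaranteed, plus $6 if P != NP is ever proved
expected_B = 10 + 5 * p_eq   # $10 guaranteed, plus $5 if P = NP is ever proved

print(expected_A, expected_B)  # 10.4 vs 10.05: A comes out ahead under these credences
```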