and A is the action of deciding to not smoke for the purpose of avoiding cancer.
People aren’t that good at understanding why they do things. It might seem like you decided not to smoke because of EDT, but the more you want to smoke, the less likely you are to follow that line of reasoning.
One of the first things we should expect a self-modifying EDT agent to do is to make a blanket precommitment for all such problems.
It wouldn’t. It would make a blanket precommitment for all such problems that have not already started. It would treat all problems currently in motion differently. Essentially, the ability to modify itself means that it can pick every future choice right now, but if “right now” is halfway through the problem, that’s not going to matter a whole lot.
It would make a blanket precommitment for all such problems that have not already started.
What do you mean by “already started”? An EDT agent doesn’t really care about time, because it doesn’t care about causation. So it will act to control events that have already happened chronologically, as long as it doesn’t know how they turned out.
People aren’t that good at understanding why they do things. It might seem like you decided not to smoke because of EDT, but the more you want to smoke, the less likely you are to follow that line of reasoning.
This is a great point. But it’s an issue for us imperfect humans more than for a true EDT agent. This is why my default is to avoid activities associated with higher mortality, until I have good reason to think that, for whatever reason, my participation in the activity wouldn’t be inauspicious, after accounting for other things that I know.
What do you mean by “already started”? An EDT agent doesn’t really care about time, because it doesn’t care about causation.
Take Parfit’s hitchhiker. An EDT agent that’s stuck in the desert will self-modify in such a way that, when someone offers to pick him up, he knows he’ll keep the promise to pay them. If an EDT agent has already been picked up and now has to make the choice to pay the guy (perhaps he became EDT when he had an epiphany during the ride), he won’t pay, because he already knows he got the ride and doesn’t want to waste the money.
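The asymmetry described here can be made concrete with a toy EDT calculation. This is a hedged sketch: the payoffs (rescue worth 1000, payment costing 100) and the driver's accuracy are illustrative assumptions, not part of the original problem statement.

```python
# Toy EDT calculation for Parfit's hitchhiker.
# Assumed, illustrative numbers: being rescued is worth 1000, paying costs 100.

RESCUE_VALUE = 1000
PAYMENT_COST = 100

def expected_utility(p_rescued_given_action, pays):
    """Conditional expected utility of an action under EDT:
    outcomes are weighted by their probability conditional on the action."""
    cost = PAYMENT_COST if pays else 0
    return p_rescued_given_action * RESCUE_VALUE - cost

# In the desert, before the ride: the driver is assumed to be a good judge
# of character, so committing to pay is strong evidence of being rescued.
eu_commit = expected_utility(p_rescued_given_action=0.99, pays=True)    # 890
eu_refuse = expected_utility(p_rescued_given_action=0.01, pays=False)   # 10
assert eu_commit > eu_refuse   # EDT in the desert: commit to paying

# During the ride: the rescue is already known, so conditioning on the
# action no longer changes its probability -- it is 1 either way.
eu_pay      = expected_utility(p_rescued_given_action=1.0, pays=True)   # 900
eu_dont_pay = expected_utility(p_rescued_given_action=1.0, pays=False)  # 1000
assert eu_dont_pay > eu_pay    # EDT during the ride: don't pay
```

The only thing that changes between the two calculations is the agent's knowledge: once the rescue is known, paying stops being good news and becomes a pure cost.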
An EDT agent that’s stuck in the desert will self-modify in such a way that, when someone offers to pick him up, he knows he’ll keep the promise to pay them.
If the agent is able to credibly commit (I assume that’s what you mean by self-modification), he doesn’t have to do that in advance. He can just commit when he’s offered the ride.
On a side note, the entry you linked says:
This is the dilemma of Parfit’s Hitchhiker, and the above is the standard resolution according to mainstream philosophy’s causal decision theory
Is it actually correct that causal decision theory is mainstream? I was under the impression that EDT is mainstream, so much so that it is usually referred to just as decision theory.
He can in that example. There are others where he can’t. For example, the guy picking him up might have other ways of figuring out if he’d pay, and not explain what’s going on until the ride, when it’s too late to commit.
Is it actually correct that causal decision theory is mainstream?
I don’t know. Both of them are major enough to have Wikipedia articles. I’ve heard that philosophers are split on Newcomb’s paradox, which would separate CDTers and EDTers.
In any case, both decision theories give the same answer for Parfit’s hitchhiker.
Okay, that’s what I thought. So this has nothing to do with time, in the sense of what happens first, but rather the agent’s current state of knowledge, in the sense of what it already knows about. Thanks for clarifying. I’m just not convinced that this is a bug, rather than a feature, for an agent that can make arbitrary precommitments.
and A is the action of deciding to not smoke for the purpose of avoiding cancer.
People aren’t that good at understanding why they do things. It might seem like you decided not to smoke because of EDT, but the more you want to smoke, the less likely you are to follow that line of reasoning.
This is being used as a proxy for the presence of the gene in question; an easy way around our lack of introspection is to use another proxy: testing for the presence of the gene.
If this were an option, it wouldn’t change the problem. An EDT agent that would quit smoking for the good news value, without knowing whether it had the gene, would either avoid getting tested, or precommit to stop smoking regardless of the test results. It would do this for the same reason that, in Newcomb’s problem, it wouldn’t want to know whether the opaque box was empty before making its decision, even if it could.
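Why the test destroys the “good news value” can be seen in a toy version of the smoking-lesion numbers. All probabilities and payoffs below are illustrative assumptions: the gene causes both the desire to smoke and cancer, while smoking itself is harmless.

```python
# Toy EDT numbers for the smoking-lesion setup (illustrative assumptions).

P_GENE = 0.5
P_SMOKE_GIVEN_GENE = 0.9      # the gene makes you likely to smoke
P_SMOKE_GIVEN_NO_GENE = 0.1
CANCER_COST = 1000
SMOKING_PLEASURE = 10

def p_gene_given(smokes):
    """Posterior probability of the gene given the agent's action,
    via Bayes' rule -- this is what EDT conditions on."""
    p_a_gene = P_SMOKE_GIVEN_GENE if smokes else 1 - P_SMOKE_GIVEN_GENE
    p_a_no_gene = P_SMOKE_GIVEN_NO_GENE if smokes else 1 - P_SMOKE_GIVEN_NO_GENE
    joint = p_a_gene * P_GENE
    return joint / (joint + p_a_no_gene * (1 - P_GENE))

def edt_utility(smokes):
    pleasure = SMOKING_PLEASURE if smokes else 0
    return pleasure - p_gene_given(smokes) * CANCER_COST

# Before testing, quitting is "good news" about the gene, so EDT quits:
assert edt_utility(False) > edt_utility(True)

# After a gene test, the posterior is pinned at 0 or 1 regardless of the
# action, so the news value vanishes and smoking's pleasure dominates.
for has_gene in (True, False):
    cancer = CANCER_COST if has_gene else 0
    assert SMOKING_PLEASURE - cancer > 0 - cancer  # smoke either way
```

This is the sense in which the test screens off the action from the gene: once the result is known, the decision carries no evidence, so an agent that values the good news has reason not to look.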
It’s used as a proxy for a lot of things. This isn’t about one specific situation. It’s about a class of problems.