I get the feeling, reading this, that you are using the word “impossible” in an unusual way. Is this the case? That is, is “impossible” a term of art in decision theory discussions, with a meaning different than its ordinary one?
If not, then I confess that I can’t make sense of much of what you say…
By “impossible” I mean not happening in actuality (which might be an ensemble, in which case I’m not counting what happens with particularly low probabilities), taking into account the policy that the agent actually follows. So the agent may have no way of knowing if something is impossible (and often won’t before actually making a decision). This actuality might take place outside the thought experiment. For example, in Transparent Newcomb that directly presents you with two full boxes (that is, both boxes being full is part of the description of the thought experiment), if you decide to take both, then the thought experiment is describing an impossible situation, while in actuality the big box is empty.
So for the problem where you-as-money-maximizer choose between receiving $10 and $5, and actually have chosen $10, I would say that taking $5 is impossible, which might be an unusual sense of the word (possibly misleading before making the decision; the 5-and-10 problem is about what happens if you take this impossibility too seriously in an unhelpful way). This is the perspective of an external Oracle that knows everything and doesn’t make public predictions.
If this doesn’t clear up the issue, could you cite a small snippet that you can’t make sense of and characterize the difficulty? Focusing on Transparent-Newcomb-with-two-full-boxes might help (with respect to use of “impossible”, not considerations on how to solve it); it’s way cleaner than Bomb.
(The general difficulty might be from the sense in which UDT is a paradigm: its preferred ways of framing its natural problems are liable to be rounded off to noise when seen differently. But I don’t know what the difficulty is on the object level in any particular case, so calling this a “paradigm” is more of a hypothesis about the nature of the difficulty that’s not directly helpful.)
By “impossible” I mean not happening in actuality (which might be an ensemble, in which case I’m not counting what happens with particularly low probabilities)
Sorry, do you mean that you don’t count low-probability events as impossible, or that you don’t count them as possible (a.k.a. “happening in actuality”)?
So the agent may have no way of knowing if something is impossible (and often won’t before actually making a decision).
This is an example of a statement that seems nonsensical to me. If I am an agent, and something is happening to me, that seems to me to be real by definition. (As Eliezer put it: “Whatever is, is real.”) And anything that is real must (again, by definition) be possible…
If what is happening to me is actually happening in a simulation… well, so what? The whole universe could be a simulation, right? How does that change anything?
So the idea of “this thing that is happening to you right now is actually impossible” seems to me to be incoherent.
I… have considerable difficulty parsing what you’re saying in the second paragraph of your comment. (I followed the link, and a couple more from it, and was not enlightened, unfortunately.)
If I am an agent, and something is happening to me
The point is that you don’t know that something is happening to you just because you are seeing it happen. Seeing it happen is what takes place when you-as-an-algorithm is evaluated on the corresponding observations. A response to seeing it happen is well-defined even if the algorithm is never actually evaluated on those observations. When we spell out what happens inside the algorithm, what we see is that the algorithm is “seeing it happen”. This is so even if we don’t actually look. (See also.)
So for example, if I’m asking what would be your reaction to the sky turning green, what is the status of you-in-the-question who sees the sky turn green? They see it happen in the same way that you see it not happen. Yet from the fact that they see it happen, it doesn’t follow that it actually happens (the sky is not actually green).
Another point is that for you-in-the-question, it might be the green-sky world that matters, not the blue-sky world. That is a side effect of how your insertion into the green-sky world doesn’t respect the semantics of your preferences, which care about blue-sky world. For you-in-the-question with preferences ending up changed to care about the green-sky world, the useful sense of actuality refers to the green-sky world, so that for them it’s the blue-sky world that’s impossible. But if agents share preferences, this kind of thing doesn’t happen. (This is another paragraph that doesn’t respect rabbit hole safety regulations.)
If what is happening to me is actually happening in a simulation… well, so what?
You typically don’t know that some observation is taking place even in a simulation, yet your response to that observation (one that never happens in any form, and is not predicted by any predictor) is still well-defined. It makes sense to ask what it is.
Sorry, do you mean that you don’t count low-probability events as impossible, or that you don’t count them as possible (a.k.a. “happening in actuality”)?
I mean that if something does happen in actuality-as-ensemble with very low probability, that doesn’t disqualify it from being impossible according to how I’m using the word. Without this caveat literally nothing would be impossible in some settings.
I… have considerable difficulty parsing what you’re saying in the second paragraph of your comment.
The link is not helpful here; it’s more about what goes wrong when my sense of “impossible” is taken too far, for reasons that have nothing to do with word choice (it perhaps motivates this word choice a little bit). The use of that paragraph is in what’s outside the parenthetical. It’s intended to convey that when you choose between options A and B, it’s usually said that taking A and taking B are both possible, while my use of “impossible” in this thread is such that the option that’s not actually taken is instead impossible.
So for example, if I’m asking what would be your reaction to the sky turning green, what is the status of you-in-the-question who sees the sky turn green? They see it happen in the same way that you see it not happen. Yet from the fact that they see it happen, it doesn’t follow that it actually happens (the sky is not actually green).
If the sky were to turn green, I would certainly behave as if it had indeed turned green; I would not say “this is impossible and isn’t happening”. So I am not sure what this gets us, as far as explaining anything…
Another point is that for you-in-the-question, it might be the green-sky world that matters, not the blue-sky world. That is a side effect of how your insertion into the green-sky world doesn’t respect the semantics of your preferences, which care about blue-sky world. For you-in-the-question with preferences ending up changed to care about the green-sky world, the useful sense of actuality refers to the green-sky world, so that for them it’s the blue-sky world that’s impossible. But if agents share preferences, this kind of thing doesn’t happen. (This is another paragraph that doesn’t respect rabbit hole safety regulations.)
My preferences “factor out” the world I find myself in, as far as I can tell. By “agents share preferences” are you suggesting a scenario where, if the sky were to turn green, I would immediately stop caring about anything whatsoever that happened in that world, because my preferences were somehow defined to be “about” the world where the sky were still blue? This seems pathological. I don’t think it makes any sense to say that I “care about the blue-sky world”; I care about what happens in whatever world I am actually in, and the sky changing color wouldn’t affect that.
The point is that you don’t know that something is happening to you just because you are seeing it happen.
Well, if something’s not actually happening, then I’m not actually seeing it happen. I don’t think your first paragraph makes sense, sorry.
You typically don’t know that some observation is taking place even in a simulation, yet your response to that observation (one that never happens in any form, and is not predicted by any predictor) is still well-defined. It makes sense to ask what it is.
Does it? I’m not sure that it does, actually… if something never happens, and I never observe it, then I never respond to it, either. My response to it is nothing.
You can ask: “but if it did happen, what would be your response?”—and that’s a reasonable question. But any answer to that question would indeed have to take as given that the event in question were in fact actually happening (otherwise the question is meaningless).
I mean that if something does happen in actuality-as-ensemble with very low probability, that doesn’t disqualify it from being impossible according to how I’m using the word. Without this caveat literally nothing would be impossible in some settings.
Well… that is a very unusual use of “impossible”, yes. Might I suggest using a different word? You seem to be saying: “yes, certain things that can happen are impossible”, which is very much counter to all ordinary usage. I think using a word in this way can only lead to confusion…
(The last paragraph of your comment doesn’t elucidate much, but perhaps that is because of the aforesaid odd word usage.)
Well, if something’s not actually happening, then I’m not actually seeing it happen.
Not actually: you seeing it happen isn’t real, but this unreality of seeing it happen proceeds in a specific way. It’s not indeterminate greyness, and not arbitrary.
if something never happens, and I never observe it, then I never respond to it, either. My response to it is nothing.
If your response (that never happens) could be 0 or 1, it couldn’t be nothing. If it’s 0 (despite never having been observed to be 0), the claim that it’s 1 is false, and the claim that it’s nothing doesn’t type check.
I’m guessing that the analogy between you and an algorithm doesn’t hold strongly in your thinking about this; it’s the use of “you” in place of “algorithm” that does a lot of work in these judgements, judgements that wouldn’t happen if we were talking about an “algorithm”. So let’s talk about algorithms to establish common ground.
Let’s say we have a pure total procedure f written in some programming language, with the signature f : O → D, where O = Texts is the type of observations and D = {0,1} is the type of decisions. Let’s say that in all plausible histories of the world, f is never evaluated on argument “green sky”. In this case I would say that it’s impossible for the argument (observation) to be “green sky”: procedure f is never evaluated with this argument in actuality.
Yet it so happens that f(“green sky”) is 0. It’s not 1 and not nothing. There could be processes sensitive to this fact that don’t specifically evaluate f on this argument. And there are facts about what happens inside f with intermediate variables or states of some abstract machine that does the evaluation (procedure f’s experience of observing the argument and formulating a response to it), as it’s evaluated on this never-encountered argument, and these facts are never observed in actuality, yet they are well-defined by specifying f and the abstract machine.
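(A minimal sketch of the setup being described, in Haskell; the names and the particular definition of f below are hypothetical, and the only point is that f “green sky” has a definite value of type D, fixed by f’s definition, whether or not any run of the program ever evaluates it.)

```haskell
-- A pure total procedure f : O -> D, with O the type of observations
-- (texts) and D = {0,1} the type of decisions (encoded here as Bool).
type O = String
type D = Bool  -- False ~ 0, True ~ 1

-- f is pure (no side effects) and total (defined on every observation).
f :: O -> D
f "green sky" = False  -- this clause is part of f even if never exercised
f _           = True

main :: IO ()
main = do
  -- In "actuality" f is only ever evaluated on "blue sky".
  print (f "blue sky")
  -- f "green sky" is never evaluated above, yet it is not "nothing":
  -- it is an expression of type D, and D has no "nothing" value (that
  -- would require Maybe D); by the definition of f it denotes False.
```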
You can ask: “but if it did happen, what would be your response?”—and that’s a reasonable question. But any answer to that question would indeed have to take as given that the event in question were in fact actually happening (otherwise the question is meaningless).
The question of what f(“green sky”) would evaluate to isn’t meaningless regardless of whether evaluation of f on the argument “green sky” is an event that in fact actually happens. Actually extant evidence for a particular answer, such as a proof that the answer is 0, is arguably also evidence of the evaluation having taken place. But reasoning about the answer doesn’t necessarily pin it down exactly, in which case the evaluation didn’t necessarily take place.
For example, perhaps we only know that f(“green sky”) is the same as g(“blue sky”), but don’t know what the values are. Actually proving this equality doesn’t in general require either f(“green sky”) or g(“blue sky”) to be actually evaluated.
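(Again a hedged sketch with hypothetical functions, to illustrate one way such an equality can be established without evaluating either side: if f handles a green-sky observation by translating it into the blue-sky one and deferring to g, then f “green sky” = g “blue sky” follows by unfolding definitions, while the common value itself stays unknown until something actually evaluates g.)

```haskell
type O = String
type D = Bool

-- Some decision procedure for the blue-sky world; its internals are
-- irrelevant to the argument and merely stand in for an unknown value.
g :: O -> D
g obs = odd (sum (map fromEnum obs))

-- Hypothetically, f translates the green-sky observation and defers to g.
translate :: O -> O
translate "green sky" = "blue sky"
translate obs         = obs

f :: O -> D
f = g . translate

-- f "green sky" = g (translate "green sky") = g "blue sky" holds by
-- unfolding the definitions above; seeing the equality requires no
-- evaluation of g, and the equality alone doesn't say which Boolean it is.

main :: IO ()
main = print (f "green sky" == g "blue sky")  -- True (here we do evaluate)
```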
You seem to be saying: “yes, certain things that can happen are impossible”, which is very much counter to all ordinary usage.
Winning a billion dollars on the stock market by following the guidance of a random number generator technically “can happen”, but I feel it’s a central example of something impossible in ordinary usage of the word. I also wouldn’t say that it can happen, without the scare quotes, even though technically it can.
I would not say “this is impossible and isn’t happening”.
This is mostly relevant for decisions between influencing one world and influencing another, possible when there are predictors looking from one world into the other. I don’t think behavior within-world (in ordinary situations) should significantly change depending on its share of reality, but also I don’t see a problem with noticing that the share of reality of some worlds is much smaller than for some other worlds. Another use is manipulating a predictor that imagines you seeing things that you (but not the predictor) know can’t happen, and won’t notice you noticing.
Well, if something’s not actually happening, then I’m not actually seeing it happen.
Not actually: you seeing it happen isn’t real, but this unreality of seeing it happen proceeds in a specific way. It’s not indeterminate greyness, and not arbitrary.
What do you mean, “proceeds in a specific way”? It doesn’t proceed at all. Because it’s not happening, and isn’t real.
if something never happens, and I never observe it, then I never respond to it, either. My response to it is nothing.
If your response (that never happens) could be 0 or 1, it couldn’t be nothing. If it’s 0 (despite never having been observed to be 0), the claim that it’s 1 is false, and the claim that it’s nothing doesn’t type check.
This seems wrong to me. If my response never happens, then it’s nothing; it’s the claim that it’s 1 that doesn’t type check, as does the claim that it’s 0. It can’t be either 1 or 0, because it doesn’t happen.
(In algorithm terms, if you like: what is the return value of a function that is never called? Nothing, because it’s never called and thus never returns anything. Will that function return 0? No. Will it return 1? Also no.)
Let’s say we have a pure total procedure f written in some programming language, with the signature f : O → D, where O = Texts is the type of observations and D = {0,1} is the type of decisions. Let’s say that in all plausible histories of the world, f is never evaluated on argument “green sky”. In this case I would say that it’s impossible for the argument (observation) to be “green sky”: procedure f is never evaluated with this argument in actuality.
(Reference for readers who may not be familiar with the relevant terminology, as I was not: Pure Functions and Total Functions.)
There could be processes sensitive to this fact that don’t specifically evaluate f on this argument.
Please elaborate!
The question of what f(“green sky”) would evaluate to isn’t meaningless regardless of whether evaluation of f on the argument “green sky” is an event that in fact actually happens.
Indeed, but the question of what f(“green sky”) actually returns is certainly meaningless if f(“green sky”) is never evaluated.
Actually extant evidence for a particular answer, such as a proof that the answer is 0, is arguably also evidence of the evaluation having taken place. But reasoning about the answer doesn’t necessarily pin it down exactly, in which case the evaluation didn’t necessarily take place.
For example, perhaps we only know that f(“green sky”) is the same as g(“blue sky”), but don’t know what the values are. Actually proving this equality doesn’t in general require either f(“green sky”) or g(“blue sky”) to be actually evaluated.
I’m afraid I don’t see what this has to do with anything…
Winning a billion dollars on the stock market by following the guidance of a random number generator technically “can happen”, but I feel it’s a central example of something impossible in ordinary usage of the word. I also wouldn’t say that it can happen, without the scare quotes, even though technically it can.
I strongly disagree that this matches ordinary usage!
… predictors looking from one world into the other …
I am not sure what you mean by this? (Or by the rest of your last paragraph, for that matter…)
That’s pretty non-standard.
I think you need to answer that.