I mean something different by “out of my range” than “out of my direct sensory experiences”. I mean that they are objects whose existence I could never deduce from any sensory experience I could possibly have. I do believe that objects outside my sensory experience exist, but I don’t believe in objects so far away that I can never interact with them, directly or indirectly. If they do exist, they don’t do so in any meaningful way.
Your beloved daughter tells you she wants to leave on a ship to found a colony outside your future light cone. I claim that there is a meaningful difference between “She’ll keep existing, so it’s sad I’ll never see her again, but I’m proud she’s following a cool dream” and “She’ll pop out of existence, why does my daughter want to die?”.
Nope. The outcome is functionally the same to me either way. I can’t tell the difference between whether she died just before crossing the edge of the cone, or made it out and lived forever, or made it out and died two days later. Therefore the difference is meaningless. People are only meaningful to me insofar as I can interact with them (directly or indirectly); otherwise they’re just abstract ideas with no predictive power.
Suppose that your daughter is leaving tomorrow and scheduled to cross your horizon next week. (It’s a fast ship.) I offer you a hundred dollars if you inject her with a poison that will remain inactive for a month, then kill her instantly and painlessly.
If your daughter were going to stay around, you would refuse: you prefer your daughter’s life to a hundred dollars. If Omega had promised to kill your daughter next week, you would accept: the poison would never affect her, and you like having a hundred dollars.
What do you do, and why?
You would refuse either way if you enjoy the fact that your daughter is alive, because she would cease to exist only to your senses, not to her own. Only if your sole reason for preferring her to be alive is that she can interact with you personally would you accept the offer, and even then only if you do not fear social repercussions for your decision. I had an interesting conversation with my own daughter when she was 9 about being put into a virtual reality machine and whether it mattered if her friends outside the machine died, as long as they were real and alive to her senses inside the machine. She said she would be fine with that only if the machine could rewrite her pre-machine memories to erase any guilt.
What I’m saying is that although I enjoy the fact that my daughter is alive, that’s because my intuitions are broken. It’s more that I enjoy the idea of my daughter being alive than it is that I actually think she is alive. Poisoning her would make it harder for me to imagine her as alive, which is why my intuitions say I shouldn’t poison her. But I think those intuitions don’t correspond to reality.
I prefer the you that you consider broken to the ideal that you believe you advocate.
The fact that you prefer this is not evidence for your claims about reality.
As a matter of fact it is. It just isn’t anywhere near as strong as the evidence that has already been talked to death at various times on this site, most notably here.
I think we must have different definitions of evidence.
In general, since all of reality is entangled with itself, it’s pretty rare for something to be precisely zero evidence for any hypothesis: that would require P(H|E) = P(H|~E), which is a single point in probability space. It’s much more common for something to be such weak evidence that, for all practical purposes, a person wouldn’t know how to evaluate its impact and/or would be wasting time attempting to do so...
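In the odds form of Bayes’ theorem, the “zero evidence” condition is sharp:

```latex
% Odds form of Bayes' theorem: posterior odds = likelihood ratio * prior odds.
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}
% E is exactly zero evidence for H only when the likelihood ratio equals 1,
% i.e. P(E \mid H) = P(E \mid \lnot H), which is equivalent to
% P(H \mid E) = P(H \mid \lnot E): a single point on a continuum.
```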
E.g., “The baby I’m thinking of was born on a rainy day. Is this evidence for or against the baby being named ‘Alex’?” A sufficiently intelligent and informed agent could correlate the average raininess of regions around the world with the likelihood of a child born there being named Alex, and thus the raininess would be (extremely weak) evidence about the baby’s name...
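As a toy version of that computation (every number below is invented for illustration; the base rate and the rain likelihoods are not real statistics):

```python
# Toy Bayes update: a rainy birthday as evidence about the name "Alex".
# Every number here is invented for illustration, not a real statistic.

prior_alex = 0.005              # assumed base rate of the name "Alex"
p_rain_given_alex = 0.3001      # assumed: "Alex" slightly overrepresented
p_rain_given_not_alex = 0.3000  # in rainier regions

posterior_alex = (p_rain_given_alex * prior_alex) / (
    p_rain_given_alex * prior_alex
    + p_rain_given_not_alex * (1 - prior_alex)
)

print(f"prior:     {prior_alex:.9f}")      # 0.005000000
print(f"posterior: {posterior_alex:.9f}")  # ~0.005001658
# The update lands in the sixth decimal place: genuinely nonzero
# evidence, and genuinely useless to a human reasoner.
```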
In that case, the fact that I believe something different than wedrifid is evidence against his point.
Wedrifid’s preferences aren’t literally zero evidence for his view of the universe, but they’re probably worth less than one part in a googol. My overall point is pretty clear. The nitpicking is annoying, and it seems to me like it’s being done because people don’t want to change their beliefs.
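For scale: a posterior shift of one part in a googol is so far below the resolution of double-precision arithmetic that it cannot even register in an ordinary numerical computation:

```python
# Machine epsilon for float64 is ~2.2e-16; a shift of 1e-100 is
# about 84 orders of magnitude smaller, so it vanishes entirely.
p = 0.5
print(p + 1e-100 == p)  # True: the "update" is numerically invisible
```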
Your argument doesn’t really make sense. You say that although agents might not know how to evaluate a piece of evidence, that evidence is still unlikely to be perfectly balanced. But probabilities are subjective, and so the fact that you don’t know how to evaluate a piece of evidence means that, for all functional purposes, the evidence is perfectly balanced.
Generally agreeing with your point—it is nitpicking. Still it may be good practice to remember to say and think “that’s not usable evidence” instead of “that’s not evidence”.
Yes, mine is along the lines of “the stuff that would cause correctly implemented Bayesian agents to update”. Yours is one of the more socially defined kinds. You will note that the evidence was described as weak evidence, with reference to the post that should resolve your confusion entirely. And weak evidence is what it is, no more and no less.
It’s ridiculously weak. Even mentioning it is an inefficient use of time and cognitive resources.
To elaborate a little more: humans believe all sorts of stuff that is dumb, and I don’t have any particular reason to believe you’re invulnerable to this. I think that your arguments about the implied invisible don’t pay rent, and they complicate the way I define things as real or unreal, so they’re not worth believing. Your attempt to use emotionally laden thought experiments instead of real arguments about epistemology furthers my skepticism about your beliefs. The fact that you insist on using your own preferences as evidence for your claims lowers your credibility even further, because that evidence is so obviously weak that the only reason you would mention it is that you’re extremely biased. Mentioning it is a waste of time; I get no predictive power out of belief in a daughter I will never see again.
You might as well argue for the weak existence of an afterlife by saying that you prefer the idea that your daughter still exists somewhere. I suppose that this should make me update my probability that there’s an afterlife by about one in a googol, but it’s also pretty clear that you’re privileging a hypothesis that doesn’t deserve it.
Eliezer’s, not mine. And they may not “pay rent”, but they do, evidently, stand between you and the murdering of your siblings. They also constitute a strictly simpler model of reality than the one you advocate.
Again, not my attempt, it was multiple other people who were all patiently trying to explain the concepts in a way you might understand.
The arguments were real. They are what you rejected in the previous sentence. This may or may not mean you actually read them.
I wasn’t, and I included careful disclaimers to that effect in both comments. I was merely making an incidental technical correction regarding misuse of the word ‘evidence’. You made a point of separating your “intuitive judgement” from your abstract far-mode ideals. When people do this, it isn’t always the abstract idealized reasoning that is the correct part. I often find that people’s intuitions have the better judgement, and that is what I see occurring here. Your intuitions were correct, and they also happen to be the side of you that is safer to be around without risk of being murdered.
Fortunately, current engineering technology is such that your particular brand of existence-denial does not pose an imminent threat. As has been mentioned, if we reached the stage where we were capable of significant interstellar travel, this kind of thing would start to matter. If there were still people who believed that things magically disappeared once they were sufficiently far away from them, then such individuals would need to be restrained by force or otherwise prevented from taking actions that they sincerely believe would not be murder, in the same way that any other murder attempt is prevented if possible.
Stop begging the question and appealing to emotion. Also, if you’re bringing up Eliezer’s arguments, then defend them yourself. Don’t hide behind his authority and pretend that you’re not responsible for the words you speak.
This model requires me to waste cognitive space on things that are the same to me whether or not they’re true. I don’t understand why you believe it is any simpler to assert that things beyond the cosmological horizon exist than to assert that they do not. I think the best answer is to say that the concept of existence itself is only worthwhile in some cases; my approach is more pragmatic than yours.
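For concreteness, that horizon sits at a definite, computable distance. Here is a rough numerical sketch; the flat ΛCDM model and the particular parameter values (H0, OMEGA_M, OMEGA_L) are assumptions chosen for illustration, with radiation neglected:

```python
# Proper distance today to the cosmic event horizon: the boundary
# beyond which events happening now can never affect us.
# Assumes flat Lambda-CDM with radiation neglected; the parameter
# values are round, Planck-like numbers chosen for illustration.
from scipy.integrate import quad

H0 = 67.7                 # Hubble constant, km/s/Mpc (assumed)
OMEGA_M = 0.31            # matter density parameter (assumed)
OMEGA_L = 0.69            # dark-energy density parameter (assumed)
C = 299792.458            # speed of light, km/s
MPC_TO_GLY = 3.2616e-3    # megaparsecs -> billions of light years

def hubble(a):
    """Hubble rate at scale factor a (a = 1 today), in km/s/Mpc."""
    return H0 * (OMEGA_M / a**3 + OMEGA_L) ** 0.5

# Comoving distance light emitted here today can still cover:
#   chi = c * integral from a=1 to a=infinity of da / (a^2 * H(a))
chi_mpc, _ = quad(lambda a: C / (a**2 * hubble(a)), 1.0, float("inf"))

print(f"cosmic event horizon today: ~{chi_mpc * MPC_TO_GLY:.1f} Gly")
# ~16.7 Gly with these numbers: events now occurring farther away
# are permanently out of causal reach, the regime this thread argues about.
```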
Why should I care whether my daughter is dead or alive if I can’t experience her either way? She is just as abstract either way. I don’t understand why you keep thinking that this sort of argument is a knock-down argument.
I understand the concepts, but I disagree with them. People should stop bringing them up unless they present solid epistemic arguments for their belief. Examples about the daughter just make things more confusing. And you’ve clearly been saying that my intuitions are wrong; don’t try to back out of that now.
Thought experiments are not arguments. They beg the question and muddle the issue with unjustified intuitions.
In other words: (1) you nitpicked instead of making a substantive point, and (2) you asserted that my intuitions were incorrect.
This matters in the same way that the possible existence of an invisible afterlife matters.
I find this style of argumentation disingenuous and decline to engage with you further on this subject.
I notice I am very confused. Are you some form of solipsist? Do you endorse selfishness? Or believe that “altruism” just means “I care about my own mental state of believing my decisions make others happy”?
Does that apply only to people you can’t interact with in principle, or to everyone you don’t interact with? E.g., if I tell you “Someone on the other side of the planet is being tortured, if you press this button a rescue team will be sent”, do you pay $1 to get access to the button, or do you stare at me blankly and say “I’ll never meet this ‘victim’ or ‘rescue team’, they don’t meaningfully exist.” and go donate to a charity that sends donors pictures of the kids they’ve helped?
I’m a solipsist in the sense that I don’t think it makes sense to believe in things that I can’t see or touch or hear, or whose existence I can’t deduce from things that I can see or touch or hear. That seems like realism, to me. It involves solipsist elements only insofar as existence isn’t an absolute state but is amenable to probabilities, as a consequence of the fact that probabilities are fundamentally subjective, as is evidence.
I endorse a broad form of selfishness. It makes me happy to make others happy or to see others happy or to know that my actions made others happy. I don’t really care to define altruism right now. I don’t think that will be relevant?
I have a broad definition of interaction in mind here. If they connect to me through a chain of variables, then that is interaction. The more direct the chain, the more interaction there is and the more valuable the person becomes; this is why I care more for family members than for people in other universes whom I will never meet. So I will interact with the victim and the rescue team, under my understanding of interaction. And $1 is cheap. So I help them.
I think this is more of a reason that my natural intuitions are broken than a good argument for your belief.
Values aren’t things which have predictive power. I don’t necessarily have to be able to verify it to prefer one state of the universe over another.
You can prefer that state, sure. But that doesn’t mean it is an accurate reflection of reality. The abstract idea of my daughter’s existence beyond the light cone is comforting, and would make me happy. But the abstract idea of my daughter’s existence in heaven is also comforting and would make me happy. I wish it were true that she existed. But I don’t believe things just because they would be nice to believe.
This is what I meant when I said that thought experiments were a bad way to think about these things. You’ve confused values and epistemology as a result of the ludicrously abstract nature of this discussion and the emotionally charged thought experiment that was thrust upon me.
I am not saying, “You value her continued existence, therefore you should believe in it.” I am saying that your values may extend to things you do not (and will not, ever) know about, and therefore it may be necessary to estimate the likelihoods of things that you do not (and will not, ever) know about. In this case, the epistemological work is being done by an assumption of regularity, and by not privileging your particular position in the physical laws of the universe. Together these make it seem more likely that there is nothing special about crossing your light cone, as opposed to her just moving somewhere from which she will happen never to communicate with you again.
It seems like a waste of time to think about things I can’t ever know about. It seems like make-believe to place objective value on the existence of things that I’ll never be able to experience or interact with. I don’t understand why I should care about things that I will never ever know or experience. My values are broken insofar as they lead me to value abstract concepts in and of themselves, as opposed to physical things that I can interact with.
I’d like to point out that my interpretation means I’ll fight like hell to keep my daughter inside my light cone, because I don’t want to lose her. Your interpretation means you’ll be content with the idea of your daughter’s existence in the abstract, and to me that’s no different from belief in an afterlife. I point this out because I think the initial example emphasizes the “downsides” of my position while ignoring any corresponding “upsides” it might entail.
I thought about this. It turns out I’ve been using a modified version of Hume’s problem of induction as the basis for my argument, in the back of my head. When it comes to real life and my future, I’m willing to temporarily discard the problem of induction because doing so brings me rewards. When it comes to things beyond my light cone and my experiences, I’m not, because there is no such reward and never could be.
In other words, I have a heuristic in my head that says paradoxes should only be set aside when it is pragmatic to ignore them, because otherwise you sacrifice mental accuracy for nothing. This heuristic means that I’m not willing to discard the problem of induction when it comes to experiences and existences beyond my range of interaction.
Hopefully that makes my position clearer to you. It’s not that I’m privileging my own position; it’s that my position is the only one I have to work from and that I have no idea how things would work outside my light cone.