In my opinion, there’s no point engaging in high-powered epistemic-rationality techniques like “optimize your opponent’s argument for him” if the process of arriving at the truth costs you more than the truth is worth. Clearly, if the process is emotionally damaging to you and the expected payoff is comparatively low, it’s not rational to engage with the problem at all.
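(If it helps to see that trade-off concretely, here’s a toy expected-value sketch; the function and every number in it are invented purely for illustration, not a claim about how to actually price distress:)

```python
# Toy sketch of the engage-or-don't trade-off described above.
# All quantities are made up for illustration; the only point is
# that the expected benefit of the truth has to beat the cost of
# getting to it.

def should_engage(p_truth_helps: float,
                  value_of_truth: float,
                  emotional_cost: float) -> bool:
    """Engage the problem only if the expected payoff of knowing
    the truth exceeds the expected cost of arriving at it."""
    expected_benefit = p_truth_helps * value_of_truth
    return expected_benefit > emotional_cost

# A low-payoff, high-distress problem: stay out of it.
print(should_engage(p_truth_helps=0.1, value_of_truth=5.0,
                    emotional_cost=3.0))  # False
```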
The way I see it, our brains go to a lot of trouble to make us believe we’re important and that our values matter. We are also the dominant species on the planet, so I’d hold out for a very good payoff before I’d start questioning that.
Another angle: what difference in sensory experience do you anticipate if we are completely irrelevant on a grand scale? None? Then clearly that part of your map isn’t for arriving at predictions about reality, and updating it will not make you more effective anyway.
...I don’t think “don’t engage the problem at all” is really a viable option. Once you’ve taken the red pill, you can’t really go back up the rabbit hole, right?
My original problem immediately made me think, “Okay, this conclusion is totally bumming me out, but I’m pretty sure it’s coming from an incomplete application of logic.” So I went with that and more or less solved it. I could do with having my solution more succinctly expressed, in a snappy, catchy sentence or two, but it seems to work. What I’m asking here is: has anybody else had to solve this problem, and how did they do it?
what difference in sensory experience do you anticipate if we are completely irrelevant on a grand scale? None?
...What? We already know that we’re completely “irrelevant” on any scale, in the sense that there is no universal utility function hardwired into the laws of physics. Discriminating between oughts and is-es is pretty basic.
The question is not whether our human utility functions are universally “true”. We already know they aren’t, because they don’t have an external truth value.
...I don’t think “don’t engage the problem at all” is really a viable option. Once you’ve taken the red pill, you can’t really go back up the rabbit hole, right?
You don’t have to fall down it and smash out your brains at the bottom either.
The question is, are our values internally consistent? How do you prove that they don’t eat themselves from the inside out, or prove that such a problem doesn’t even make sense?
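(For one narrow, concrete reading of “internally consistent,” here’s a toy sketch: treat values as strict pairwise preferences and check whether they contain a cycle, the classic money-pump failure. This is just my illustrative formalization, not something the question above commits you to:)

```python
# Toy sketch: one narrow formalization of "internally consistent values".
# Strict preferences are directed edges (a -> b means "a is preferred
# to b"); a cycle means the preferences "eat themselves", since their
# holder can be money-pumped around the loop.

from typing import Dict, List

def has_preference_cycle(prefers: Dict[str, List[str]]) -> bool:
    """Return True if the strict-preference graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {option: WHITE for option in prefers}

    def visit(option: str) -> bool:
        color[option] = GRAY
        for worse in prefers.get(option, []):
            if color.get(worse, WHITE) == GRAY:
                return True  # back edge: preferences loop around
            if color.get(worse, WHITE) == WHITE and visit(worse):
                return True
        color[option] = BLACK
        return False

    return any(color[o] == WHITE and visit(o) for o in list(color))

# A > B > C > A: values that eat themselves from the inside out.
print(has_preference_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))  # True
```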
This is a basilisk that appears in many forms. For children, it’s “How do you know there isn’t a monster under the bed?” For horror readers, it’s “How do you know that everyday life is anything more than a terrifyingly fragile veneer over unspeakable horrors that would instantly drive us mad if we so much as suspected their existence?” For theists, “How do you know God in His omnibenevolence passing human understanding doesn’t torture every sentient being after death for all eternity?” For AGI researchers, “How do you know that a truly Friendly AI wouldn’t in Its omnibenevolence passing human understanding reanimate every sentient being and torture them for all eternity?” For utilitarians, “How can we calculate utility, when we are responsible for the entire future lightcone not only of ourselves but of every being sufficiently like us anywhere in the universe?” For philosophers, “How can we ever know anything?” “Does anything really exist?”
It’s all down to how to deal with not having an answer. The fact is, there is no ultimate foundation for anything: you will always have questions that you currently have no answer to, because it is easier to question an answer than to answer a question (as the parents of any small child know). Terror about what the unknown answers might be doesn’t help. I prefer to just look directly at the problem instead of haring off after solutions I know I can’t find, and then Ignore it.
...I don’t think “don’t engage the problem at all” is really a viable option. Once you’ve taken the red pill, you can’t really go back up the rabbit hole, right?
You don’t have to fall down it and smash out your brains at the bottom either.
I… don’t think that metaphor connects to any choices you can actually make.
Don’t get me wrong, I’m not against ignoring things. There are so many things you could pay attention to and the vast majority simply aren’t worth it.
But when you find yourself afraid, or uneasy, or upset, I don’t think you should ignore it. I don’t think you really can.
There’s got to be some thought or belief that’s disturbing you (usually, anyway; insert the usual caveats), and you’ve got to track it down and nail it to the ground. Maybe it’s a real external problem you’ve got to solve; maybe it’s a flawed belief you’ve got to convincingly disprove to yourself, or at least honestly convince yourself that it belongs to the huge class of things that aren’t worth worrying about.
But if that’s the correct solution to a problem, just convincing yourself it ain’t worth worrying about, then you’ve still got to arrive at that conclusion as an actual belief, by actually thinking about it. You can’t just decide to believe it, because that would just be belief in belief, and that doesn’t really seem to work, emotionally, even for people who haven’t generalized the concept of belief in belief.
Anyway, the more I’ve had to practice articulating what was bothering me (the worry that our/my values logically auto-cannibalize themselves), the more I’ve come to actually believe that it’s not worth worrying about. (It no longer feels particularly likely to mean much, and even if it did, why would it apply to me?)
So when you said:
I prefer to just [A] look directly at the problem instead of [B] haring off after solutions I know I can’t find, and then [C] Ignore it.
Yeah, that’s pretty much exactly what I was doing when I wrote this post. Just that in order to effectively reach [C: Ignore], you’ve got to properly do [A: look at the problem] first, and [B: try looking for solutions] is part of that.