I’m not sure if this is the right place to ask this, but does anyone know what point Paul’s trying to make in the following part of this podcast? (Relevant section starts around 1:44:00)
Suppose you have a P probability of the best thing you can do and a one-minus-P probability of the worst thing you can do; what does P have to be so that you’re indifferent between that and the barren universe? I think most of my probability is distributed between you would need somewhere between 50% and 99% chance of good things, and then put some probability or some credence on views where that number is a quadrillion times larger or something, in which case it’s definitely going to dominate. A quadrillion is probably too big a number, but very big numbers. Numbers easily large enough to swamp the actual probabilities involved.
[ . . . ]
I think that those arguments are a little bit complicated; how do you get at these? I think, to clarify the basic position, the reason that you end up concluding it’s worse is just: consult your intuition about how bad the worst thing that can happen to a person is vs. the best thing, and, damn, the worst thing seems pretty bad. And then the first-pass response is to sort of have this debunking understanding, where we understand causally how it is that we ended up with this kind of preference with respect to really bad stuff versus really good stuff.
If you look at what happens over evolutionary history: what is the range of things that can happen to an organism, and how should an organism be trading off best possible versus worst possible outcomes? Then you end up in, well, to what extent is that a debunking explanation, one that explains why humans, in terms of their capacity to experience joy and suffering, are unbiased while the reality is still biased, versus to what extent is this fundamentally reflected in our preferences about good and bad things? I think it’s just a really hard set of questions. I could easily imagine maybe shifting on them with much more deliberation.
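As far as I can tell, the expected-value structure he’s gesturing at looks something like the sketch below. This is only my attempt to formalize it; every number is made up for illustration, and it naively averages utilities across moral views (which is itself philosophically contested):

```python
# A rough sketch of the expected-value comparison I take Paul to be
# describing. Every number below is an assumption of mine for
# illustration; none of it comes from the podcast.

def indifference_p(best: float, worst: float) -> float:
    """Probability p of the best outcome (vs. 1 - p of the worst) at
    which the gamble is exactly as good as a barren universe (utility 0).
    Solves p * best + (1 - p) * worst = 0 for p."""
    return -worst / (best - worst)

print(indifference_p(best=1.0, worst=-1.0))   # 0.5  (worst as bad as best is good)
print(indifference_p(best=1.0, worst=-99.0))  # 0.99 (worst 99x as bad)

# The 'swamping' point: put even a tiny credence on a view where the
# worst is a quadrillion times as bad, and it dominates the expectation.
views = [
    (0.999, 1.0),   # 99.9% credence: worst is 1x as bad as best is good
    (0.001, 1e15),  # 0.1% credence: worst is a quadrillion times as bad
]
p_good = 0.9  # suppose a 90% chance of the best outcome
expected = sum(c * (p_good * 1.0 - (1 - p_good) * ratio) for c, ratio in views)
print(expected)  # ~ -1e11: hugely negative despite the tiny 0.1% credence
```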
It seems like an important topic, but I’m a bit confused by what he’s saying here. Is the perspective he’s discussing (and puts non-negligible probability on) one that holds that the worst possible suffering is a bajillion times worse than the best possible pleasure is good? Wouldn’t that suggest every human’s life is net-negative in expectation (even if your credence in this being the case is only ~0.1%)? Or is this just discussing the energy-efficiency of ‘hedonium’ and ‘dolorium’, in which case it’s of solely altruistic concern and can be dealt with by strictly limiting compute?
Also, I’m not really sure if this set of views is more “a broken bone/waterboarding is a million times as morally pressing as making a happy person”, or along the more empirical lines of “most suffering (e.g. waterboarding) is extremely light, humans can experience far far far far far^99 times worse; and pleasure doesn’t scale to the same degree.” Even a tiny chance of the second one being true is awful to contemplate.
Here’s a model that might simplify things:
Really negative events can affect people’s lives for a long time afterward.
From that model, it’s easier to have large utility effects by, say, reducing extreme negative events than by making someone who is already ‘happy’ a little bit happier. So while the second thing may seem easier to do (cheaper), the first may still be more impactful even after you divide by its cost.
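A toy version of that comparison, with numbers invented purely to show the shape of the claim:

```python
# Toy cost-effectiveness comparison; the numbers are made up purely to
# illustrate the shape of the argument above.

# Intervention A: prevent an extreme negative event (big effect, expensive).
gain_a, cost_a = 1000.0, 100.0
# Intervention B: make an already-happy person slightly happier (small, cheap).
gain_b, cost_b = 1.0, 1.0

print(gain_a / cost_a)  # 10.0 units of utility per unit of cost
print(gain_b / cost_b)  # 1.0 unit of utility per unit of cost
# Even though B is cheaper, A wins after dividing impact by cost.
```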
The obvious connection is how things play out within a person’s life. If, say, you break your arm, maybe it’ll be harder to do other things because:
it’s in a cast and you can’t use it while it heals
you’re in pain; maybe you don’t enjoy things, like watching a movie, as much when you’re in a lot of pain.
[Insert argument for wearing a helmet while riding a bike or motorcycle even if it’s mildly inconvenient—because it helps reduce/prevent stuff that’s way more inconvenient.]
“and pleasure doesn’t scale to the same degree”
It’s easy to scale pain? This just seems like an argument that ‘Becoming slightly happier’ is less pressing morally than ‘reducing the amount of torture* in the world’.
*Might be worth noting that if this is about extreme pain, then this implies ‘improving access to medical care’ can be a very powerful intervention, i.e., effective altruism.
Thanks for the response; I’m still somewhat confused though. The question was to do with the theoretical best/worst things possible, so I’m not entirely sure whether parallels to (relatively) minor pleasures/pains are meaningful here.
Specifically I’m confused about:
“Then you end up in, well, to what extent is that a debunking explanation, one that explains why humans, in terms of their capacity to experience joy and suffering, are unbiased while the reality is still biased”
I’m not really sure what’s meant by “the reality” here, nor what’s meant by “biased”. Is the assertion that humans’ intuitive preferences are driven by the range of possible things that could happen in the ancestral environment, and that this isn’t likely to match the maximum possible pleasure-vs.-suffering ratio achievable in the future? If so, how does this lead one to end up concluding it’s worse (rather than better)? I’m not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.
“so I’m not entirely sure whether parallels to (relatively) minor pleasures/pains are meaningful here.”
Ah. I suggested them because I figured that such ‘(relatively) minor’ things are what people have actually experienced, and thus are the obvious source for extrapolating out to the theoretical maximums.
I don’t know what’s meant by ‘reality’ there. Your guess seems reasonable (and was more transparent than what you quoted).
I’m not sure how to guess the maximum ratio.
“I’m not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.”
Likewise. (A quadrillion seems like a lot—I’d need a detailed explanation to get why someone would choose that number.)
I think... it makes sense less as a description of emotion than as a utility function, but that’s not what is being talked about.
Part of it is... when people are well off, do they pursue the greatest pleasure? I think negative extremes prompt a focus on basics, while in better conditions people may pursue more complicated things. Overall, there’s something about focus, I guess:
‘I don’t want to die’ versus ‘I’m happy to be alive!’: which sentiment is stronger? It’s easy to pull up an extreme case like that for a thought experiment, but if people don’t face that risk in their lives, then maybe the second sentiment, or the absence of the risk, doesn’t have as much salience, because the risk isn’t present. (Short version: a) it’s hard to reason about scenarios outside of experience*; b) this might induce asymmetry in estimates or intuitions.)
*I have experienced stuff I’d never experienced before and found, ‘wow, that was way more intense than I’d expected.’