At the very least, we should expect the universe not to treat conscious beings like disposable objects, but apparently, that’s “demanding so much of it.”
I don’t understand this perspective. When an airplane crash happens, do you blame the laws of gravity for this? Even if you did, you’d also have to give it points for permitting the existence of life in the first place, no? Then what’s the difference with (morally) blaming evolution for animal suffering? (Notwithstanding the whole animal consciousness debate.)
Evolution is of course by no means nice, but what’s the point of blaming something for cruelty when it couldn’t possibly be any different? (somewhat related LW post: An Alien God)
From a transhumanist perspective, the universe of course has room for improvement by many, many orders of magnitude. But if the universe didn’t come with universal and consistent (and thus amoral) physical laws, there would be no conceivable transhumanist interventions. Nor, in the first place, any transhumanists to contemplate them.
“Evolution is of course by no means nice, but what’s the point of blaming something for cruelty when it couldn’t possibly be any different?”
That’s the thing; I’m really not convinced about that. I’m sure there could be other universes with different laws of physics where the final result would be much nicer for conscious beings. In this universe, it couldn’t be different, but that’s precisely the thing we are judging here.
It may very well be that there are different universes where conscious beings are having a blast and not being tortured and killed as frequently as in this universe that gave rise to this situation. There is no real proof that says existence should be so painful. It could just be the random bad luck of the draw.
Well, evolution’s indifference seems pretty fundamental, rather than contingent. So I suspect that any average creator-less universe would involve similar amounts of suffering. To quote a shortened passage from An Alien God:
Why is Nature cruel? You, a human, can look at an Ichneumon wasp, and decide that it’s cruel to eat your prey alive… Or what about old elephants, who die of starvation when their last set of teeth fall out? These elephants aren’t going to reproduce anyway. What would it cost evolution—the evolution of elephants, rather—to ensure that the elephant dies right away, instead of slowly and in agony? What would it cost evolution to anesthetize the elephant, or give it pleasant dreams before it dies? Nothing; that elephant won’t reproduce more or less either way.
If you were talking to a fellow human, trying to resolve a conflict of interest, you would be in a good negotiating position—would have an easy job of persuasion. It would cost so little to anesthetize the prey, to let the elephant die without agony! Oh please, won’t you do it, kindly… um...
There’s no one to argue with...
There’s no advocate for the elephants anywhere in the system.
Humans, who are often deeply concerned for the well-being of animals, can be very persuasive in arguing how various kindnesses wouldn’t harm reproductive fitness at all. Sadly, the evolution of elephants doesn’t use a similar algorithm; it doesn’t select nice genes that can plausibly be argued to help reproductive fitness. Simply: genes that replicate more often become more frequent in the next generation. Like water flowing downhill, and equally benevolent.
To get a benevolent evolution (or an evolution-less mechanism which creates life), you need some already-benevolent entity to steer it. I.e. an intelligent designer, a god, the programmer of a universe simulation, or similar. Barring those, there’s no mechanism to make the system intrinsically benevolent. And even then, since a value like benevolence is too complex to arise by pure accident, those entities must in turn have evolved to develop benevolence (arising via mechanisms like kin selection etc.), and must thus themselves have started out in an amoral universe.
If evolution is indifferent, you would expect a symmetry between suffering and joy, but in our world, it seems to lean towards suffering (The suffering of an animal being eaten vs. the joy of the animal eating it. People suffer from chronic pain but not from chronic pleasure, etc.).

I think there are a lot of physics-driven details that make it happen. Due to entropy, most things are bad for you and only a few are good, so negative stimuli that signal “beware!” are more frequent than positive stimuli that signal “come close.”

One can imagine a less hostile universe where you still have dangers, but a larger percentage of things are good. In our universe, most RNG events are negative, but one can imagine a different universe with different laws of physics that wouldn’t work this way. It doesn’t require a benevolent creator or a non-evolutionary process.
There are many more states of the world that are bad for an individual than good for that individual, and feeling pleasure in a bad world state tends to lead to death. So no, in an amoral world I’d expect much more suffering than pleasure, because suffering is more instrumentally useful for survival. I think, given that, your last point is just… completely unsupported and incorrect.
The first part of your reply is basically repeating the point I made, but again, the issue is you’re assuming the current laws of physics are the only laws that allow conscious beings without a creator. I disagree that must be the case.
How can my last point be supported? Do you expect me to create a universe with different laws of physics? How do you know it’s incorrect?
I fully agree with you that there are a vast set of possible laws of physics that could create conscious beings without a creator. Not sure where it seemed like I meant the opposite? What I disagree with is the idea that us ending up in an amoral world is “bad luck.” A priori I expect it would require literally-unbelievably good luck to end up in a “good” world without a creator/designer, because almost all possible laws supporting conscious beings will not be like that.
And I agree with your point that evolution leans towards suffering, but I disagree with your assertion that an indifferent process would tend to have a symmetry between the two. I see no underlying reason why there would be such a symmetry, and many why there would not.
As for your last point—sorry I wasn’t precise, but at some level yes, I agree that such a thing is technically possible. It is just superexponentially unlikely to the point that a random being is more likely to be a Boltzmann brain than to actually be in such a universe. How many bits of data does it take to reduce a moral system to math, encode that math into physics, and arrange the entire universe’s mass-energy such that over time good greatly outweighs bad across its entire spacetime? What fraction of the 2^(N+1)-1 possible datasets that large or smaller correspond to good universes, as opposed to neutral or bad ones? That’s the level of “unlikely” we’re considering. When I say literally unbelievable, I mean the entire cosmos, let alone a human mind, is incapable of representing a number that small.
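To make the scale concrete, here is a back-of-envelope sketch of that counting argument. The value of K is purely an assumed placeholder, since nobody knows how many bits pinning down a “good” universe would actually take; the point is only how fast the fraction shrinks.

```python
# Back-of-envelope for the counting argument above. Assume (purely for
# illustration) that a "good" universe requires K specific bits of the
# physics-plus-initial-conditions dataset to come out one particular way.
# Then about 2**(N-K) of the 2**N equal-length datasets are good,
# i.e. a fraction of 2**-K, independent of N.
K = 1_000                 # assumed number of constrained bits
fraction_good = 2.0 ** -K
print(fraction_good)      # ~9.3e-302, already far below one in a googol
```

Even with only a thousand constrained bits, the fraction is smaller than one part in 10^301; for anything like the bit-count of a universe's full specification, the number is indeed unrepresentably small.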
You don’t need a moral universe; you just need one where the joy outweighs the suffering for conscious beings (“agents”). There are many ways in which this can happen:
Starting from a mostly hostile world but converging quickly towards a benevolent reality created by the agents.
Existing in a world where the distribution of bad vs. good external things that the agent can encounter is similar.
Existing in a hostile world, but one in which the winning strategy is leeching onto a specific resource (which will grant internal satisfaction once reached).
I’m sure you can think of many other examples. Again, it’s not clear to me intuitively that the existence of these worlds is as improbable as you claim.
I’m curious whether, and why, it would matter if a universe starts out with goodness baked into the laws themselves, or instead becomes better over time through the actions of beings that the amoral laws and initial conditions cough up. Our own universe gave itself a trillion trillion stars around which to potentially create life, just in our own Hubble volume, and will continue to exist for many times longer than the few hundred million years since the first life capable of suffering or joy appeared on Earth. If it’s possible for good things in one place/time to outweigh bad things in other places and times (which seems to be a prerequisite for this discussion to be meaningful), and possible in principle for beings like us to make things better, then how can we draw any conclusions about the morality of the whole of spacetime, except that we should try our best and reserve judgement?
Because you have a pretty significant data point on Earth (one that spans millions of years), and nothing else is going on (to the best of our knowledge). Now the question is: how much weight do you want to give to this data point? Reserving judgment means almost ignoring it. For me, it seems more reasonable to update towards a net-negative universe.
Then I think that’s the crux for me. I’d say the right amount of weight is almost none, for the same reason that I don’t update about the expected sum of someone’s life based on what they do in the first weeks after they’re born. We agree the universe did not come into being with the capacity for aiming itself toward being good. It remains to be seen whether we (or other lifeforms elsewhere) do have enough of that capability to make use of it at large scale, which we didn’t even have the capacity to envision until very, very recently.
Given the trajectory and speed of change on Earth in the past few centuries, I think the next few centuries will provide far more data about our future light cone than the entirety of the past millions of years does.
you would expect a symmetry between suffering and joy, but in our world, it seems to lean towards suffering (The suffering of an animal being eaten vs. the joy of the animal eating it. People suffer from chronic pain but not from chronic pleasure, etc.).
Why do you think that life leans towards suffering? I’m not convinced by the argument that the experience of being eaten as prey is worse than the experience of eating prey; that just illustrates that one specific and short type of experience is asymmetric. I’m aware that, due to effects like negativity bias, individual negative experiences are likely more impactful than positive ones.
However, to make the case that the life of an individual or species leans towards suffering, you’d have to make the case that, on average, the respective integral of lifetime experiences is negative. To make the further case that life in general leans towards suffering, those experience integrals would further have to be weighted by degree of consciousness (or ability to experience joy & suffering, or something).
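That proposal can be written out as explicit bookkeeping. The sketch below is hypothetical: the numbers, species, and consciousness weights are invented purely to show the shape of the calculation, not to estimate anything real.

```python
def net_welfare(populations):
    """Sum of lifetime-experience integrals, weighted by population size
    and an (assumed) degree-of-consciousness factor."""
    return sum(integral * count * weight
               for integral, count, weight in populations)

# (lifetime experience integral, number of individuals, consciousness weight)
# -- every number here is illustrative, not an empirical estimate.
populations = [
    ( 50.0,  1_000, 1.0),   # highly conscious species with mildly good lives
    ( -5.0, 80_000, 0.1),   # less conscious species with mildly bad lives
]

print(net_welfare(populations))   # 50000 - 40000 = 10000.0
```

The sign of the total can flip entirely depending on the weights chosen, which is exactly why the weighting step matters for the "life in general" version of the claim.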
“I’m not convinced by the argument that the experience of being eaten as prey is worse than the experience of eating prey”
Would you see the experience of being eaten alive yourself (let’s say even having a dog chew off your hand) as hedonistically equivalent to eating a steak? (Long-term damage aside.)

I don’t think most people would agree to have both of these experiences; they would rather avoid both, which means the suffering is much worse than the pleasure of eating meat.
I agree with the proposed methodology, but I have a strong suspicion that the sum will be negative.
I’m not convinced by the argument that the experience of being eaten as prey is worse than the experience of eating prey; that just illustrates that one specific and short type of experience is asymmetric.
You only quoted part of my sentence, and I think you misunderstood my point as a result. I’m wholly aware that being eaten is worse than eating, I just don’t think it particularly matters.
The key point is whether the median moment is positive, negative, or neutral; that will likely dominate any calculation, not brief extreme experiences, whether positive or negative.
You’re right about my misunderstanding. Thanks for the clarification.
I don’t think the median moment is the correct KPI if the distribution has high variance, and I believe this is the case with pain and pleasure experiences. Extreme suffering is so bad that most people would need a lot of “normal” time to compensate for it. I would think that most people would not trade torture for life extension at a 1:1 ratio, and probably not even at 1:10. (E.g., you get tortured for time X and get your life extended by a·X in return.)
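The median-vs-total disagreement can be made concrete with a toy calculation (all hedonic values invented): under high variance, a life whose median moment is clearly positive can still sum to a negative total.

```python
import statistics

# A toy "life": 9,990 mildly good moments plus 10 moments of extreme
# suffering. The hedonic values are invented purely for illustration.
life = [1.0] * 9_990 + [-2_000.0] * 10

print(statistics.median(life))   # 1.0      -> the median moment looks fine
print(sum(life))                 # -10010.0 -> the lifetime total is negative
```

Which summary statistic you trust here is exactly the crux: the median ignores the tail entirely, while the sum lets a handful of extreme moments dominate.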
I do think our universe will converge toward such an agent-created benevolent reality, if we make it do so. The future is bigger than the past, and we can be the mechanism for that.

Maybe, and maybe not.
On the claim that ordinary good time can’t make up for extreme suffering, see for example: “A Happy Life Afterward Doesn’t Make Up for Torture” (The Washington Post).