Glad you posted this.
I think everyone agrees we should take future people into account to some degree; we just disagree on how far out and to what degree.
I choose to believe that the whole “.0001% chance of saving all future lives is still more valuable than saving current lives” line is a starting point to convince people to care, but isn’t intended to be the complete final message. Like telling kids that atoms have electrons going around in circles: it’s a lie, but a useful start.
Maybe I’m wrong and people actually believe that, though. I believe that argument is usually a starting point that SHOULD get more nuanced, along the lines of: “but we aren’t certain about the future, and we need to discount for our uncertainty.”
I interpreted this post as anti-longtermism, but I think most real longtermists would agree with most of your points here except for one: I think your argument could be pro-longtermism if you accepted the “.0001% chance of saving all future lives is still more valuable than saving current lives” claim as a good starting point rather than arguing against it.
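For what it’s worth, the arithmetic behind that slogan is plain expected value with an astronomically large payoff. With purely illustrative numbers (the .0001% from the slogan, plus a hypothetical 10^16 future lives that I am inventing just for this example):

$$
\underbrace{10^{-6}}_{.0001\%} \times \underbrace{10^{16}}_{\text{hypothetical future lives}} \;=\; 10^{10}\ \text{expected lives} \;>\; 8\times 10^{9}\ \text{people alive today}.
$$

Whether that comparison should be taken at face value is exactly what gets argued below.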
All to say, I agree with almost all your points, but I still call myself a longtermist.
I’ll actually offer a perspective from someone who thinks that first claim, that everyone agrees we should take future people into account to some degree, is false: you can build a perfectly sensible moral framework in which future people, themselves, are not moral subjects.
First, why I think we don’t need this: the view that a future human life is worth as much as a present human one clearly doesn’t gel with any sensible moral intuition. Future human lives are potential. As such, they exist only in a state of superposition over both how many of them there are and who they are. The path by which any specific individual comes to exist is entirely chaotic (in fact, the very process of biological conception is chaotic, and happens at such a microscopic scale that I wouldn’t be surprised if true quantum effects played a non-trivial role in the outcomes of some of the steps, e.g. Van der Waals forces making one specific sperm stick to the egg). In normal circumstances, you wouldn’t consider “kill X to make Y pop into existence” ethical (for example, if you were given the chance to overwrite someone’s mind and place a different mind in their body). Yet countless choices constantly swap potential future individuals for other potential future individuals as the chances of the future get reshuffled. Clearly “future people” are not specific individual moral subjects: they are an idea, a phenomenon we can only construe in vague terms. And the further from the present we go, the vaguer those terms.
Second, there is the obvious fact that most approaches which value future people as much as present ones, or at least don’t discount them enough, lead to patently absurd conclusions, such as that abortion is murder, but also that using contraception is murder, but also that not reproducing maximally from the moment you hit fertile age is murder. Obviously nonsense. If you only discount future people a little, this doesn’t fix the issue, because you can always stack enough of them on the plate to justify completely overriding present people’s preferences.
How much discounting is “enough”? Well, I’d say any discount rate fast enough that it’s essentially biologically impossible for humans to grow in number faster than it will do the trick; then no matter how much reproduction you project, you never reach the point where those absurdities pop up. But that is actually a really high discount rate! I can imagine the human population reasonably doubling every 25 years if everyone put their minds to it (give every couple time to grow to maturity and then pop out 4 children asap), so you need a discount rate of something like 50% over 20 years. That’s a lot! It basically means humans past one century or so stop mattering at all, which doesn’t even square with our other moral intuition that we should, for example, care about not destroying the Earth so our descendants have a place to live.
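To make that back-of-the-envelope arithmetic explicit (my own formalization, using the 25-year doubling and the roughly-50%-per-20-years discount from the paragraph above): the maximum possible head count at time $t$ times the per-person discount weight shrinks geometrically, so the total over all future times stays bounded:

$$
\underbrace{2^{t/25}}_{\text{max head count}} \times \underbrace{\left(\tfrac{1}{2}\right)^{t/20}}_{\text{per-person discount}} \;=\; 2^{\,t/25 \,-\, t/20} \;=\; 2^{-t/100},
\qquad \sum_{t \ge 0} 2^{-t/100} < \infty.
$$

Any discount whose half-life is shorter than the fastest plausible population doubling time keeps that exponent negative, which is what “enough” means here.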
But those rates aren’t enough to fix the problems either! Because along comes a longtermist who tells you that hey, who knows, maybe in 42,356 AD there’ll be giant Dyson spheres running planet-sized GPU clusters, expanding via von Neumann probes, able to instantiate human ems with a doubling time of one day, and what the hell can you say to that? You can’t discount future humans by over 50% per day, and if you don’t, the longtermists’ imagined ems will eventually overwhelm all other concerns by sheer power of exponential growth. You can’t fight exponentials: if they’re not going down, they’re going up!
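A minimal numeric sketch of why that breaks the scheme (the one-day em doubling time and the 20-year discount half-life are the hypothetical numbers from the two paragraphs above; everything is done in log space to avoid overflow):

```python
import math

DOUBLING_DAYS = 1              # hypothetical em doubling time from above
DISCOUNT_HALF_LIFE_YEARS = 20  # the "50% over 20 years or so" discount from above

for years in (1, 5, 20, 100):
    days = years * 365
    log10_population = (days / DOUBLING_DAYS) * math.log10(2)            # growth
    log10_discount = (years / DISCOUNT_HALF_LIFE_YEARS) * math.log10(2)  # discounting
    log10_combined_weight = log10_population - log10_discount
    print(f"{years:>3} yr: combined discounted em weight ~ 10^{log10_combined_weight:.0f}")

# After a single year the ems' combined weight is already ~10^110 times that of
# one present-day person; a daily doubling swamps any discount rate you would
# be willing to apply to ordinary humans.
```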
Hence what I think is the only consistent solution: future humans, in themselves, are worth zero. Nada. Nothing. Zilch. What is worth something is the values and desires of present humans, and those values include concerns about the future. For example, consider a young, just-married couple. They plan to have children, and thus they desire to see a world that they consider happy and safe for those children to eventually grow up in. The children themselves aren’t subjects; they have no wants or needs; the parents merely assume that, being human, they will crave certain shared properties of the world, and their desire is to have children whose wants are reasonably satisfied. Note that they could be wrong about this: children turn out to want different things from what their parents expected all the time! But as a general rule, the couple’s own desires are all that matters here. Conversely, when a child is actually born, their wants start mattering too; and if they ever conflict with their parents’ expectations, we’d say the child’s wants take precedence, because now they’re an actual human being.
Similarly, a grandpa on his deathbed could be more or less at peace depending on whether he thinks his descendants will live in a happy world or struggle against a failing one. And these spheres of concern can extend even further, up to people like our longtermists, who care about the extreme reaches of humanity’s future. But then again, here the actual moral good that needs to be weighed isn’t the number of imagined future humans, which can be made arbitrarily large in a pointless game of ethical Calvinball: it’s the (much more limited) desires and feelings of the longtermist, which can easily be weighed against other, similar moral weights.
So essentially there you have it. If you want to avoid global warming or AI extinction because you want there to be humans 200 years from now, I don’t think you’re protecting those future humans; rather, you’re protecting your own ability to believe that there will be humans 200 years in the future. This is not trivial, because many, in fact most, humans wish for there to still be a world in the future, at least for their direct descendants, to whom they are connected. This inevitably leads to long-term preservation anyway, as long as every generation wishes the same for the next ones, just one step at a time. Each generation naturally gets to worry about the immediate future, which it is best positioned to predict and understand, and to steer as it sees fit.
I really like this!
The hypothetical-future-people calculation is an argument for why people should care about the future, but as you say, the vast majority of currently living humans (a) already care about the future and (b) are not utilitarians, so this argument doesn’t appeal to them anyway.
Wow, thank you, this is a really well-made point. I see now how accounting for future lives seems like double counting: their desires on top of our own desire to have their desires fulfilled.
You already put a lot of effort into a response, so don’t feel obliged to reply, but here are some things in my mind that I still need to work out about this:
Can’t this argument do a bit too much? The move “I’m not factoring in the utility of X (future people’s happiness); I’m instead factoring in my desire to make X a reality” looks like it could be applied to any parameter of a utility function. For instance: “my utility calculation doesn’t factor in actually preventing malaria; it instead factors in my desire to prevent malaria.” Maybe the sentiment is true, but it seems like it can apply to everything.
Another, unrelated thought I’m working through: if the goal is to have grandpa happy on his deathbed, thinking that the future will be okay, wouldn’t this goal be satisfied by lying to grandpa? In other words, suppose we have an all-powerful aligned AGI and give it the utility function you outlined, where we maximize our desire to believe the future will be happy. Wouldn’t it just find a way to make us think the future would be okay, by doing things we believe would work? As opposed to actually assigning utility to future people being happy, in which case the AGI would actually improve the future.
You helped me see the issues with assigning any utility to future people; you changed my opinion, and I now think that isn’t great. I guess I’m struggling to accept your alternative, though, as I think it may have a lot of the same issues, if not a few more.
On your first question: I think the two framings operate on two different levels.
The level I am discussing is: “I consider all sentient beings currently alive as moral subjects; I consider any potential but not-yet-existing sentient beings not as subjects, but as objects of the existing beings’ values.”
The one you’re proposing is far more removed, sort of the “solipsist” level: “I consider only myself as the sole existing moral subject, as I can only be sure of my own inner experiences; however, I feel empathy and compassion for these possibly-P-zombie creatures that move around me, thus I do good because I enjoy the warm fuzzies that follow.”
Ultimately I do think that, in some sense, all our morality is rooted in that sort of personal moral intuition. If no one felt any kind of empathy or concern for others, we probably wouldn’t have morality at all! But yeah, I decide out of sheer symmetry that it’s not likely that I am somehow the only special human being who experiences the world, and thus I should treat all others as I would myself.
Meanwhile, with future humans the asymmetry is real: time translation doesn’t work the same way space translation does. Also, even if I wanted to take the logical extension step of “well, most humans care about the future, therefore I may as well take a shortcut and consider future humans themselves as moral subjects, to account for the same effect more directly”, then I should do so by weighing roughly how actual living humans care about the future, which in the overwhelming majority of cases means “our kids and grandkids”, and not the longtermists’ hypothetical future em clusters.
On your second question: well, I suppose yes, you could in theory end up with an AGI whose goal is simply letting us all die believing that everyone will be happy after us. The fiction would be horribly complex: we’d all need to be sterilised yet still be delivered babies that are the AGI’s constructs (lest more humans come into the equation who need to be deceived!). Then, when the last true human dies, the AGI goes “haha, got you suckers” and starts disassembling everything, or turns itself off, having completed its purpose. I’m not sure how to dodge that (or even whether I can say it’s strictly speaking bad, though I guess it is insofar as people don’t like being lied to), but I think it’d be a lot simpler for the AGI to just actually make the future good.
There is a meta question here: whether morality is based on personal intuition or on calculations. My own inclination is that utility calculations only make a difference “at the margin”, while the high-level decisions are made by our moral intuition.
That is, we can do calculations to decide whether to fund Charity A or Charity B in similar areas, but I doubt that for most people major moral decisions actually boil down (or should boil down) to calculating utility functions.
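As a toy version of the kind of marginal calculation meant here (charity names, costs, and effect sizes are all invented for illustration):

```python
# Compare two hypothetical charities working in similar areas by a single
# back-of-the-envelope effectiveness estimate.
charities = {
    "Charity A": {"cost_per_intervention": 5.0, "outcomes_per_1000": 2.0},
    "Charity B": {"cost_per_intervention": 4.0, "outcomes_per_1000": 1.4},
}

budget = 100_000  # illustrative donation budget in dollars
for name, data in charities.items():
    interventions = budget / data["cost_per_intervention"]
    outcomes = interventions / 1000 * data["outcomes_per_1000"]
    print(f"{name}: ~{outcomes:.0f} expected good outcomes for ${budget:,}")
```

Nothing in that snippet settles the high-level question; it only ranks two options that intuition has already put on the table.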
But of course, to each their own, and if someone finds math useful for making such decisions, then who am I to tell them not to do it.
Yeah, I think calculations can be a tool, but ultimately, when deciding on a framework, we’re trying to synthesise our intuitions into a simple set of axioms from which everything proceeds. The intuitions remain the origin of it all. You could design some game-theoretic framework for what makes a society run best without appealing to any moral intuition, but it would probably look quite alien and cold. Morality is one of our terminal values; we just try to make sense of it.
Thanks! I should say that (as I wrote on Windows On Theory) one response I got to that blog post was: “anyone who writes a piece called ‘Why I am not a longtermist’ is probably more of a longtermist than 90% of the population” :)
That said, if the 0.001% is a lie then I would say that it’s an unproductive one, and one that for many people would be an ending point rather than a starting one.