In these kinds of problems, I notice that my intuitions give some of (!) the same answers as if I were trying to think about the long- or medium-range future effects of a potential life, and about how much it would cost to get future effects as good as the effects that such a life would have had.
Some of the effects of a person’s life are on material wealth produced and consumed. But beyond material wealth, the future effects that seem hardest to replace are the effects of a person noticing possibilities or perspectives or conceptualizations or ways of thinking, and advocating those perspectives so that other people know about them afterwards. These effects are hard to replace because replacing them would require imagining a random hypothetical person and the perspectives they would have had. Humans can’t do this for perspectives they don’t already have; instead they imagine perspectives already stated in terms familiar to them.
Part of what I mean here was explained in the LW article “The Value of Nature and Old Books”: when you are in nature or contemplating nature, you can be confident that the resulting thoughts and feelings are uncontaminated by all of the (visible and invisible) ideas and biases and assumptions that are present in your particular time and place.
Part of it is explained by a theory promoted by complex-systems theorists such as Stuart Kauffman: when complex structures (like organisms, whole computer systems, or perspectives) are in evolutionary competition, new basic structures (like the Baupläne (“body plans”) of evolutionary developmental biology, or computing platforms) are usually explored only at the beginning of the competition (like the Cambrian explosion, or the computer revolution), because later all the competitive niches are filled with highly adapted versions of the basic structures, and changing the basic structure would make for an uncompetitive total structure. This is true even if a better basic structure is known, one which would have outcompeted all the existing structures if it had had the same amount of time to be adapted.

The same is probably true for perspectives competing to represent (or at least not contradict) our knowledge as we learn. When we have old perspectives which can represent current knowledge, and we learn something new, the way we update our predictions is by adapting the old perspectives until they can represent the new knowledge. But if other perspectives would have been adapted to represent all the knowledge just as well, we should also mix in the predictions they would have made, to reduce the relative entropy from our predictions to the truth. And if another perspective would have been adapted to represent all the knowledge better, we should use that perspective instead, and give the other perspectives less weight.
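To make the mixing step precise (this formalization is mine and only a sketch): write p for the true distribution over future observations and q_i for the predictive distributions of equally well-adapted perspectives. Relative entropy is convex in its second argument, so a weighted mixture of the perspectives’ predictions is never worse, in expected log loss, than the average single perspective:

```latex
% Convexity of relative entropy in its second argument gives, for
% weights w_i >= 0 with \sum_i w_i = 1:
\[
  D\!\left(p \,\middle\|\, \sum_i w_i q_i\right)
  \;\le\; \sum_i w_i \, D(p \,\|\, q_i).
\]
% So mixing in the predictions that other adapted perspectives would
% have made cannot hurt on average, and hedges against any single
% perspective's blind spots.
```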
Part of what I mean is related to some of the intuitions which support the idea of “openness to the unbidden”, a phrase used by theologian William May and quoted by political philosopher Michael Sandel to express a bioconservative position. (Though, to my eye, those intuitions better support ideas like “bidding a particular sampling of the otherwise unbidden, according to a sequential design optimized for the benefits of openness”, or even “quadrature with the otherwise unbidden”.)
Part of what I mean is related to some of the intuitions which support randomization protocols, as in double-blind controlled studies, product testing, or auditing.
Part of what I mean is explained by analogy: When political opinions might be censored, we don’t trust everyone else to substitute for free speech by imagining the expression of those opinions. When a political official is democratically elected, we don’t trust them to substitute for balloting by imagining when their constituents would want to replace them. When someone is tried, we don’t trust a jury to substitute for a defense attorney (or a prosecutor) by imagining what the attorney might argue. When a scientist argues for a conclusion from some evidence, we don’t trust the scientist to substitute for criticism by imagining what other scientists would say about it. When someone might die, we don’t trust other people to substitute for his life by imagining the difference he would have tried to make and his reasons for trying to make it. We don’t trust a central economic planner to substitute for the information from a market mechanism by imagining what everyone might need everywhere. In experimental mathematics, we don’t trust people to substitute for computational results by imagining them. Likewise, we might not trust people to substitute for the perspectives of potential people by imagining what those perspectives would be.
(As an opposite example, we would trust someone to substitute for the random details of an ideal gas by using another sample of the same gas.)
This argument also helps with the Repugnant Conclusion. In a world where there are many, many people who all have lives barely worth living, one expects none of them to rise to a level where they can accomplish much to make a better world with better things in it, compared to a world where there are fewer people, some of whom have lives that are more worth living. If “barely worth living” is redefined to include “barely making worthwhile improvements to the future”, then the Repugnant Conclusion might evaporate.
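For concreteness, here is a minimal numeric instance of the Conclusion under standard total-utility bookkeeping (the particular numbers are only illustrative):

```latex
\[
  \underbrace{10 \times 100}_{\text{world A: few, flourishing}} = 1000
  \;<\;
  \underbrace{10000 \times 1}_{\text{world Z: many, barely worth living}} = 10000,
\]
% so totalism prefers Z. The argument above would add to each life a
% term for its future effects, a term near zero for every life in Z,
% which can flip the comparison back.
```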
My argument about expensive-to-replace perspectives is probably not a good causal explanation of our intuitions about the value of duplicates, though. Some of the cause is probably instincts around the problem of distinguishing imagined futures in which one is alive (1+ copies: “that’s me!”) from imagined futures in which one is dead (0 copies).
I think it’s interesting that other arguments in these comments, like Richard Chappell’s about “holism” or Phil Goetz’s about “information-theoretic complexity”, also try to justify valuing duplicates less by somehow opening up room to argue about the consequences that come after the potential person’s experiences.
Someone else must have already said this, but I also feel like the “goodness” or “badness” that we associate with experiences resembles what you would expect in an approximate reinforcement learning system which put changes in expected utility as near as possible to the actions that might have caused them. The implied utility might be defined over events which happen at times different from the times at which the reinforcement learning system’s “beliefs” about those events change. Whether you reflectively care about the implied utility is a question for metaethics, but this is a reason to expect that we should feel confused about how to locate “intrinsic goodness” within time.
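A minimal sketch of the kind of system I mean (the environment and numbers are hypothetical illustrations, not a model of real brains): in temporal-difference learning, the change in expected utility, the TD error, is credited to the state just visited, so the “goodness” is felt near the causing action rather than at the time of the valued event.

```python
# Hedged sketch: tabular TD(0) learning, where the "felt goodness" of a
# moment is modeled as the TD error: the change in expected utility
# credited to the most recently visited state.
ALPHA, GAMMA = 0.1, 0.9     # learning rate, discount factor
value = {}                  # state -> estimated expected future utility

def step(state, next_state, reward):
    """Update the value estimate; return the TD error ("felt goodness")."""
    v, v_next = value.get(state, 0.0), value.get(next_state, 0.0)
    td_error = reward + GAMMA * v_next - v  # change in expected utility
    value[state] = v + ALPHA * td_error     # credited to the recent state
    return td_error

# Toy chain 0 -> 1 -> 2 -> 3 -> 4 with utility only at the end: over
# episodes, "goodness" migrates backward toward the actions that caused
# it, away from the time at which the valued event actually happens.
for _ in range(500):
    for s in range(4):
        step(s, s + 1, reward=1.0 if s + 1 == 4 else 0.0)

print({s: round(v, 2) for s, v in sorted(value.items())})
```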
But what would it mean for the value of a life to be the value of its long- or medium-range future effects, if the only way those effects could have value is by changing other lives so that they have valuable future effects? What would be easiest for me in the short run is to bite the bullet and say that the only worlds that matter are worlds with infinite futures, as Richard Hollerith has argued, or worlds that are otherwise mathematically related (e.g. by simulation or logical constraints) to an infinitely detailed context. But others would disagree. One argument they could make is that a renormalization of values using a fixed point of this kind of implied infinite regress, done honestly, doesn’t have to keep the nice features it might seem to have during the first few steps (“valuing evolution, knowledge, diversity, and the environment; and condemning (non-selective) destruction and censorship”).
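One hypothetical way to make the regress concrete (my own gloss; nothing above commits to it): let v_i be the value of life i and A_ij measure how much life i improves the future effects of life j. “A life’s value is the value of its effects” then reads v = Av, a fixed point found by renormalized iteration, as with PageRank’s dominant eigenvector:

```latex
\[
  v \;=\; A v,
  \qquad
  v^{(t+1)} \;=\; \frac{A\, v^{(t)}}{\left\lVert A\, v^{(t)} \right\rVert}.
\]
% For suitable A the iteration converges to the dominant eigenvector,
% and that limit need not keep whatever appealing properties the early
% iterates v^{(1)}, v^{(2)}, ... appeared to have, which is the
% objection in the last sentence above.
```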
Unfortunately, this comment is incomprehensible.
Oh. I wasn’t thinking as much about comprehensibility as maybe I should have been. I’ll take it down and try to rewrite it.
You comment rarely, but when you do, it seems that you put in a lot of effort, elaborating a great deal of detail (especially given how few people will get the gist of what you are saying). Now you’ve taken even more effort upon yourself (or reduced the number of people who will be able to read what you’ve written, since you’ve taken it down), and I feel guilty. :-(
I wish Steve had just added a note indicating that he’s rewriting the comment, rather than deleting it.