My intuition is that chickens are less sentient, and that being less sentient is sort of like thinking more slowly. Perhaps a year of a chicken’s life is equivalent to a day of a human’s. A day of a chicken’s life adds less to the numerator than a day of a human’s, but it also adds less to the denominator.
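To make the numerator/denominator point concrete with made-up weights (just taking the year-to-day equivalence above at face value, not as a real estimate): the weighted average is (sum over everyone of weight × welfare) divided by (sum over everyone of weight), so a chicken-day with weight 1/365 adds welfare/365 to the numerator and 1/365 to the denominator. Its pull on the average, up or down, is about 1/365th that of a human-day at the same welfare level.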
Maybe I’m way off base here, but it seems like average utilitarianism leads to a disturbing possibility of its own: that 1 super happy person is considered a superior outcome to 1,000,000,000,000 pretty darn happy people. Please explain how, if at all, I’m misinterpreting average utilitarianism.
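A minimal worked example, with made-up welfare numbers: put the one super happy person at 100 and each of the 1,000,000,000,000 pretty darn happy people at 90. The average is 100/1 = 100 in the first world and (10^12 × 90)/10^12 = 90 in the second, so the average view prefers the first world even though its total welfare (100) is dwarfed by the second’s (9 × 10^13).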
I think you just have different intuitions than average utilitarians. I have talked to someone who saw no reason why having a higher population is good in and of itself.
I am somewhat swayed by an anthropic argument. If you live in the first universe (the one with the single super happy person), you’ll be super happy. If you live in the second, you’ll be pretty darn happy. Thus, the first universe is better.
On the other hand, you often need to consider that you’re less likely to live in one universe than in another. For instance, if you could make 10% of the population vastly happier by killing the other 90%, you need to factor in the 10% chance of survival.
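A rough calculation of the kind that suggests, again with made-up numbers: say the cull would raise the survivors’ welfare from 50 to 90, and count not surviving as 0. The average among the living afterwards is 90, which beats 50; but your expected welfare as a random member of the current population is 0.1 × 90 = 9 if the cull happens versus 50 if it doesn’t, so factoring in the 10% chance of survival flips the verdict.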
I don’t buy into that theory of identity. The way the universe works, observer-moments are arranged in lines. There’s no reason this is necessary in principle. It could be a web where minds split and merge, or a bunch of Boltzmann brains that appear and vanish after a nanosecond. You are just a random one of the observer-moments. And you have to be one that actually exists, so there’s a 100% chance of survival.
If you did buy into that theory, that would result in a warped form of average utilitarianism, where you want to maximize the average value of the total utility of a given person.
You are just a random one of the observer-moments.
I don’t think the word “you” is doing any work in that sentence.
Personal identity may not exist as an ontological feature at the low level of physical reality, but it does exist at the high level of our experience, and I think it’s meaningful to talk about identities (lines of observer-moments) which may die (the line ends).
If you did buy into that theory, that would result in a warped form of average utilitarianism, where you want to maximize the average value of the total utility of a given person.
I’m not sure I understand what you mean (I don’t endorse average utilitarianism in any case). Do you mean that I might want to maximize the average of the utilities of my possible time-lines (due to imperfect knowledge), weighted by the probability of those time-lines? Isn’t that just maximizing expected utility?
Personal identity may not exist as an ontological feature at the low level of physical reality, but it does exist at the high level of our experience, and I think it’s meaningful to talk about identities (lines of observer-moments) which may die (the line ends).
I don’t think that’s relevant in this context. You are a random observer. You live.
I suppose if you consider it intrinsically important to be part of a long line of observers, then that matters. But if you just think that you’re not going to have as much total happiness because you don’t live as long, then either you’re fundamentally mistaken or the argument I just gave is.
I’m not sure I understand what you mean
If “you” are a random person, and this includes the entire lifespan, then the best universe would be one where the average person has a long and happy life, but adding more people wouldn’t help.
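A small illustration of that, with made-up numbers: if “a random person” means a random whole life, the quantity being maximized is the average of lifetime totals. A world of 10 people who each end up with a lifetime total of 80 scores 80; a world of 10,000 such people also scores 80; a world of 10 people at a lifetime total of 40 scores 40. Longer, happier lives raise the score; sheer headcount doesn’t.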
weighted by the probability of those time-lines?
If you’re saying that it’s more likely to be a person who has a longer life, then I guess our “different” views on identity probably are just semantics, and you end up with the form of average utilitarianism I was originally suggesting.
That’s very different from saying “you are a random observer-moment” as you did before.
I suppose if you consider it intrinsically important to be part of a long line of observers, then that matters.
I consider it intrinsically important to have a personal future. If I am now a specific observer—I’ve already observed my present—then I can drastically narrow down my anticipated future observations. I don’t expect to be any future observer existing in the universe (or even near me) with equal probability; I expect to be one of the possible future observers who have me in their observer-line past. This seems necessary to accept induction and to reason at all.
If “you” are a random person, and this includes the entire lifespan, then the best universe would be one where the average person has a long and happy life, but adding more people wouldn’t help.
But in the actual universe, when making decisions that influence the future of the universe, I do not treat myself as a random person; I know which person I am. I know about the Rawlsian veil, but I don’t think we should have decision theories that don’t allow us to optimize the utility of observers similar to myself (or belonging to some other class), rather than of all observers in the universe. We should be allowed to say that even if the universe is full of paperclippers who outnumber us, we can just decide to ignore their utilities and still have a consistent utilitarian system.
(Also, it would be very hard to define a commensurable ‘utility function’ for all ‘observers’, rather than just for all humans and similar intelligences. And your measure function across observers—does a lizard have as many observer-moments as a human?—may capture this intuition anyway.)
I’m not sure this actually disagrees with you. I still feel confused about something, but it may just be a misunderstanding of your particular phrasing.
If you’re saying that it’s more likely to be a person who has a longer life,
I didn’t intend that. I think I should taboo the verb “to be” in “to be a person”, and instead talk about decision theories which produce optimal behavior—and then in some situations you may reason like that.
That’s very different from saying “you are a random observer-moment” as you did before.
I meant observer-moment. That’s what I think of when I think of the word “observer”, so it’s easy for me to make that mistake.
If I am now a specific observer—I’ve already observed my present—then I can drastically narrow down my anticipated future observations.
If present!you anticipates something, it makes life easy for future!you. It’s useful. I don’t see how it applies to anthropics, though. Yous aren’t in a different reference class than other people. Even if they were, it can’t just be future!yous that are one reference class. That would mean that whether or not two yous are in the same reference class depends on the point of reference. First!you would say they all have the same reference class. Last!you would say he’s his own reference class.
I do not treat myself as a random person; I know which person I am.
I think you do if you use UDT or TDT.

I’m not an expert, but I got the impression that UDT/TDT only tells you to treat yourself as a random person from the class of persons implementing the same decision procedure as yourself. That’s far narrower than the set of all observers.
And it may be the correct reference class to use here. Not future!yous but all timeline!yous—except that when taking a timeful view, you can only influence future!yous in practice.