Is there a standard name for the logical fallacy where you attempt a reductio ad absurdum but fail to notice that you’re deriving the absurdity from more than one assumption?
Good catch. Yes, I was deriving the absurdity from more than one assumption.
Why conclude that it’s the caring about far-away strangers that is crazy, as opposed to the decision algorithm that says you should give in to extortions like this?
Maybe with the right decision algorithm you wouldn’t give in to extortions like this. However, this extortion attempt cost the aliens approximately nothing, so unless correctly inferring our decision algorithm cost them less than approximately nothing, the rational step for the aliens is to try the extortion regardless. Thus having a different decision algorithm probably wouldn’t prevent the extortion attempt.
But then changing your values to not care about simulated torture won’t prevent the extortion attempt either (since the aliens will think there’s a small chance you haven’t actually changed your values and it costs them nothing to try). Unless you already really just don’t care about simulated torture, it seems like you’d want to have a decision algorithm that makes you go to war against such extortionists (and not just ignore them).
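To make the cost-benefit point concrete, here is a minimal sketch with made-up numbers (the prize value, attempt cost, and success probabilities are assumptions for illustration, not figures from the story):

```python
# Expected-value check for the extortionist's decision. All numbers are
# illustrative; the point is only that a near-zero cost of attempting makes
# the attempt worthwhile for almost any probability of success.

def attempt_is_worthwhile(p_success, value_of_prize, cost_of_attempt):
    """True if the expected gain from attempting the extortion exceeds its cost."""
    return p_success * value_of_prize > cost_of_attempt

VALUE_OF_PRIZE = 1_000_000.0   # e.g. a vacant solar system, in arbitrary units
COST_OF_ATTEMPT = 1e-6         # 'approximately nothing'

for p in (0.5, 0.01, 1e-9):
    print(p, attempt_is_worthwhile(p, VALUE_OF_PRIZE, COST_OF_ATTEMPT))
# All three print True, which is why neither a different decision algorithm
# nor different values reliably deters the attempt.
```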
This sort of thing is really the motivating example behind Newcomb’s problem.
I’m not seeing the analogy. Can you explain?
The extortion attempt cost the aliens almost nothing, and would have given them a vacant solar system to move into if someone like Fred was in power, so it’s rational for them to make the attempt almost regardless of the odds of succeeding. Nobody is reading anybody else’s mind here, except the idiots who read their own minds and uploaded them to the Internet, and they don’t seem to be making any of the choices.
This case looks most like the ‘transparent boxes’ version of the problem, which I haven’t read much about.
In Newcomb’s problem, Omega offers a larger amount of utility if you will predictably do something that intuitively would give a smaller amount of utility.
In this situation, being less open to blackmail probably gives you less disutility in the long run (fewer instances of people trying to blackmail you) than acceding to the blackmail, even though acceding intuitively gives you less disutility.
The other interesting part of this particular scenario is how to define ‘blackmail’ and differentiate it from, say, someone accidentally doing something that’s harmful to you and asking you to help fix it. We’ve approached that issue, too, but I’m not sure if it’s been given a thorough treatment yet.
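A toy model of the long-run disutility comparison, under the assumption (mine, and disputed elsewhere in this thread for zero-cost extortion) that a reputation for conceding attracts more attempts; all numbers are illustrative:

```python
# Compare two dispositions toward blackmail over many years, assuming the
# rate of attempts depends on whether you are known to concede.

def long_run_disutility(p_attempt_per_year, cost_per_attempt, years=100):
    """Expected total disutility from blackmail over the given horizon."""
    return years * p_attempt_per_year * cost_per_attempt

# Always concede: each attempt is cheap for you, but attempts are frequent.
concede = long_run_disutility(p_attempt_per_year=0.5, cost_per_attempt=10.0)

# Never concede: each attempt hurts more, but would-be blackmailers rarely bother.
refuse = long_run_disutility(p_attempt_per_year=0.01, cost_per_attempt=100.0)

print(concede, refuse)  # 500.0 vs 100.0: refusing wins in the long run here
```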
But then changing your values to not care about simulated torture won’t prevent the extortion attempt either (since the aliens will think there’s a small chance you haven’t actually changed your values and it costs them nothing to try).
That ‘costs them nothing’ part makes a potentially big difference. That the aliens must pay to make their attempt is what gives your decision leverage. The war that you suggest is another way of ensuring that there is a cost. Even though you may actually lose the war and be exterminated.
(Obviously there are whole other scenarios where becoming a ‘protectorate’ and tithing rather than going to war constitutes a mutually beneficial cooperation, when their BATNA is just to wipe you out but it is slightly better for them to let you pay them.)
I really don’t care about simulated torture, certainly not enough to prefer war over self-modification if simulated torture becomes an issue. War is very expensive and caring about simulated torture appears to be cost without benefit.
The story is consistent with this. Fred has problems because he cares about simulated torture, and Thud doesn’t care and doesn’t have problems.
Hmm, perhaps we agree that the story has only one source of absurdity now? No big deal either way.
(UDT is still worth my time to understand. I owe you that, and I didn’t get to it yet.)
If Fred cared about the aliens exterminating China, and Thud didn’t care; then if the aliens instead threatened to exterminate China, Fred would again have problems and Thud again wouldn’t have.
A rock doesn’t care about anything, and therefore it has no problems at all.
This topic isn’t really about simulation, it’s about the fact that caring about anything permits you to possibly sacrifice something else for it. Anything that isn’t our highest value may end up traded away, sure.
If Fred cared about the aliens exterminating China, and Thud didn’t care; then if the aliens instead threatened to exterminate China, Fred would again have problems and Thud again wouldn’t have.
You can travel from here to China and back. Therefore, caring about China has at least a potential instrumental consequence on the rest of my life. You can’t travel from here to the aliens’ simulation and back, so caring about what happens there imposes costs on the rest of my life but no benefits. The analogy is not valid.
Now, if the black spheres had decent I/O capabilities and you could outsource human intellectual labor tasks to the simulations, I suppose it would make sense to care about what happens there. People can’t do useful work while they’re being tortured, so that wasn’t the scenario in the story.
You can travel from here to China and back. Therefore, caring about China has at least a potential instrumental consequence on the rest of my life.
That’s the only sane reason you believe can exist for caring about distant people at all? That you can potentially travel to them?
So suppose you’re a paraplegic: you don’t want to travel anywhere, you can’t travel anywhere, and you know you’ll die in two weeks anyway. You get a choice to push a button or not push it. If you push it you get 1 dollar right now, but 1 billion Chinese people will die horrible deaths in two weeks, after your own death.
Are you saying that the ONLY “sane” choice is to push the button, because you can use the dollar to buy bubblegum or something, while there’ll never be a consequence on you for having a billion Chinese die horrible deaths after your own death?
If so, your definition of sanity isn’t the definition most people have. You’re talking about the concept commonly called “selfishness”, not “sanity”.
If so, your definition of sanity isn’t the definition most people have. You’re talking about the concept commonly called “selfishness”, not “sanity”.
Fine. Explain to me why Fred shouldn’t exterminate his species, or tell me that he should.
The extortion aspect isn’t essential. Fred could have been manipulated by true claims about making simulated people super happy.
ETA: At one point this comment had downvotes but no reply, but when I complained that that wasn’t a rational discussion, someone actually replied. LessWrong is doing what it’s supposed to do. Thanks people for making it and participating in it.
I would give in to the alien demands in that situation, assuming we ‘least convenient possible world’ away all externalities (the aliens might not keep their promise, there might be quadrillions of sentient beings in other species who we could save by stopping these aliens).
The way the story is told makes it easy for us to put ourselves in the shoes of Fred, Thud or anyone else on earth, and hard to put ourselves in the shoes of the simulations, faceless masses with no salient personality traits beyond foolishness. This combination brings out the scope insensitivity in people.
A better way to tell the story would be to spend 1000 times as many words describing the point of view of the simulations as that of the people on earth. I wonder how ridiculous giving in would seem then.
I would give in to the alien demands in that situation...
It’s good to have a variety of opinions on the mass-suicide issue. Thanks for posting.
The way the story is told makes it easy for us to put ourselves in the shoes of Fred, Thud or anyone else on earth, and hard to put ourselves in the shoes of the simulations, faceless masses with no salient personality traits beyond foolishness. This combination brings out the scope insensitivity in people.
IMO scope insensitivity is a good thing. If you don’t have scope insensitivity, then you have an unbounded utility function, and in the event you actually have to compute your utility function, you can’t effectively make decisions because the expected utility of your actions is a divergent sum. See Peter de Blanc’s paper and subsequent discussion here. If your values incorporate the sum of a divergent series, what happens in practice is that your opinion at the moment varies depending on whichever part of the divergent sum you’re paying attention to at the moment. Being vulnerable to Pascal’s Wager, Pascal’s Mugging, and having Fred’s mass-suicide preference in the story are all symptomatic of having unbounded utility functions. Shut Up and Multiply is just wrong, if you start from the assumption that scope insensitivity is bad.
ETA: I should probably have said “Shut Up and Multiply” is just wrong, if one of the things you’re multiplying by is the number of individuals affected.
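To illustrate the divergence claim, here is a small numerical sketch using a standard St. Petersburg-style lottery (my construction for illustration, not an example taken from de Blanc’s paper):

```python
import math

# Outcome k occurs with probability 2**-k and affects 3**k people. With a
# utility linear in the number of people (unbounded), partial sums of expected
# utility grow without bound; with a bounded, saturating utility they converge.

def partial_expected_utility(utility, terms):
    return sum(2.0 ** -k * utility(3.0 ** k) for k in range(1, terms + 1))

linear = lambda n: n                            # unbounded: u(n) = n
bounded = lambda n: 1.0 - math.exp(-n / 1e6)    # bounded: saturates at 1

for terms in (10, 20, 40):
    print(terms,
          partial_expected_utility(linear, terms),    # keeps growing (diverges)
          partial_expected_utility(bounded, terms))   # settles toward a limit
```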
If you truly believe this then I think we really do have no common ground. Scope insensitivity experiments have found that people will pay more to save one child than to save eight. Effectively, it will make you kill seven children and then make you pay for the privilege. It is, IMO, morally indefensible and probably responsible for a great deal of the pain and suffering going on as we speak.
If you don’t have scope insensitivity, then you have an unbounded utility function, and in the event you actually have to compute your utility function, you can’t effectively make decisions because the expected utility of your actions is undefined.
A central tenet of the theory of rationality, as I understand it, is that decision theory, the computational methods used, and even the method of updating are all up for grabs if a more convenient alternative is found, but the utility function is not.
My utility function is what I want. Rationality exists to help us achieve what we want; if it is ‘rational’ for me to change my utility function, then this ‘rationality’ is a spectacular failure and I will abandon it for an ‘irrational alternative’.
What you point out may be a problem, but the solution lies in decision theory or algorithmics, not in paying to kill children.
Scope insensitivity experiments have found that people will pay more to save one child than to save eight.
I agree that those people are confused in some way I do not understand.
A central tenet of the theory of rationality, as I understand it, is that decision theory, the computational methods used, and even the method of updating are all up for grabs if a more convenient alternative is found, but the utility function is not.
Agreed. We need some way to reconcile that with the aforementioned result about scope-insensitive people who would pay the wrong price for saving one child versus saving eight.
My utility function is what I want.
True. I think your process for introspecting your utility function is wrong, and I think the procedure for inferring the utility function of the scope insensitive people who were thinking about saving one or eight children was flawed too, in a different way.
Humans can’t really have unbounded utility. The brain structures that represent those preferences have finite size, so they can’t intuit unbounded quantities. I believe you care some finite amount for all the rest of humanity, and the total amount you care asymptotically approaches that limit as the number of people involved increases to infinity. The marginal utility to you of the happiness of the trillionth person is approximately zero. Seriously, what does he add to the universe that the previous 999,999,999,999 didn’t already give enough of?
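One possible functional form for this kind of bounded, asymptotic caring (the exponential shape and the scale constant are assumptions chosen purely for illustration):

```python
import math

U_MAX = 1.0    # total caring capacity, in arbitrary units
SCALE = 1e9    # assumed constant controlling how quickly caring saturates

def utility_of_population(n):
    """Bounded utility that rises toward U_MAX as the population n grows."""
    return U_MAX * (1.0 - math.exp(-n / SCALE))

def marginal_utility(n):
    """Extra utility contributed by the n-th person."""
    return utility_of_population(n) - utility_of_population(n - 1)

print(marginal_utility(100))      # about 1e-9: early people each add a full 'share'
print(marginal_utility(10**12))   # about 0.0: the trillionth person adds essentially nothing
```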
What you point out may be a problem, but the solution lies in decision theory or algorithmics, not in paying to kill children.
Straw man argument. You’re the only one who mentioned paying to kill children.
I’m afraid that backing away from the whole “one child over eight” thing but standing by the rest of scope insensitivity doesn’t save you from killing. For example, if you value ten million people only twice as much as a million, then you can be persuaded to prefer a 20% chance of death for 10 million over certain death for 1 million, which means condemning, on average, an extra 1 million people to death.
Any utility function that does not assign utility to human life in direct proportion to the number of lives at stake is going to kill people in some scenarios.
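Spelling out the arithmetic of that example (the ‘twice as much’ valuation is the one stipulated above; the numbers just make the comparison explicit):

```python
# A valuation on which 10 million lives are worth only twice as much as
# 1 million leads you to prefer the option with twice the expected deaths.

def scope_insensitive_value(lives):
    # Stipulated shape: value(10,000,000) == 2 * value(1,000,000).
    return {1_000_000: 1.0, 10_000_000: 2.0}[lives]

certain_loss = 1.0 * scope_insensitive_value(1_000_000)    # disutility 1.0 for sure
gamble_loss = 0.2 * scope_insensitive_value(10_000_000)    # disutility 0.4 in expectation

print(gamble_loss < certain_loss)        # True: this agent prefers the gamble
print(0.2 * 10_000_000, 1_000_000)       # 2,000,000 vs 1,000,000 expected deaths
```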
Humans can’t really have unbounded utility. The brain structures that represent those preferences have finite size, so they can’t intuit unbounded quantities. I believe you care some finite amount for all the rest of humanity, and the total amount you care asymptotically approaches that limit as the number of people involved increases to infinity. The marginal utility to you of the happiness of the trillionth person is approximately zero. Seriously, what does he add to the universe that the previous 999,999,999,999 didn’t already give enough of?
I go with the ‘revealed preference’ theory of utility myself. I don’t think the human brain actually includes anything that looks like a utility function. Instead, it contains a bunch of pleasure pain drives, a bunch of emotional reactions independent of those drives, and something capable of reflecting on the former two and if necessary overriding them. Put together, under sufficient reflection, these form an agent that may act as if it had a utility function, but there’s no little counter in its brain that’s actually tracking utility.
Thus, the way to deduce things about my utility function is not to scan my brain, but to examine the choices I make and see what they reveal about my preferences. For example, I think that if faced with a choice between saving n people with certainty and a 99.9999% chance of saving 2n people, I would always pick the latter regardless of n (I may be wrong about this, I have never actually faced such a scenario for very large values of n). This proves mathematically that my utility function is unbounded in lives saved.
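The implicit step from that preference pattern to unboundedness, made explicit (assuming we normalize u(save nobody) = 0 and that u(n0) > 0 for some starting n0):

```python
# Preferring a 99.9999% chance of saving 2n people to saving n for certain,
# for every n, means 0.999999 * u(2n) >= u(n), i.e. u(2n) >= u(n) / 0.999999.
# Iterating: u(2**k * n0) >= u(n0) / 0.999999**k, which grows without limit,
# so no finite bound on u is consistent with that preference pattern.

u_n0 = 1.0   # take u(n0) = 1 for some starting population n0
for k in range(0, 20_000_001, 5_000_000):
    print(k, u_n0 / 0.999999 ** k)   # lower bound on u(2**k * n0); keeps growing
```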
For example, if you value ten million people only twice as much as an a million then you can be persuaded to prefer 20% chance of death for 10 million over certain death for 1 million, which means, on average, condemning 1 million people to death.
The merit of those alternatives depends on how many people total there are. If there are only 10 million people, I’d much rather have 1 million certain deaths than a 20% chance of 10 million deaths, since we can repopulate from 9 million but we can’t repopulate from 0.
Any utility function that does not assign utility to human life in direct proportion to the number of lives at stake is going to kill people in some scenarios.
Even if condemning 1 million to death on the average is wrong when all options involve the possible deaths of large numbers of people, deriving positive utility from condemning random children to death when there’s no dilemma is an entirely different level of wrong. Utility as a function of lives should flatten out but not slope back down, assuming overpopulation isn’t an issue. The analogy isn’t valid. Let’s give up on the killing children example.
Thus, the way to deduce things about my utility function is not to scan my brain, but to examine the choices I make and see what they reveal about my preferences.
Yes! Agreed completely, in cases where doing the experiment is practical. All our scenarios seem to involve killing large numbers of people, so the experiment is not practical. I don’t see any reliable path forward—maybe we’re just stuck with not knowing what people prefer in those situations any time soon.
For example, I think that if faced with a choice between saving n people with certainty and a 99.9999% chance of saving 2n people, I would always pick the latter regardless of n
If 2n is the entire population, in one case we have 0 probability of ending up with 0 people and in the other case we have 0.0001% chance of losing the entire species all at once. So you seem to be more open to mass suicide than I’d like, even when there is no simulation or extortion involved. The other interpretation is that you’re introspecting incorrectly, and I hope that’s the case.
Someone voted your comment down. I don’t know why. I voted your comment up because it’s worth talking about, even though I disagree.
Okay, I guess the long term existence of the species does count as quite a significant externality, so in the case where 2n made up the whole species I probably would take the certain option (I generally assume, unless stated otherwise, that both populations are negligible proportions of the species as a whole).
However, I don’t think humanity is a priori valuable, and if humanity now consists of 99.9% simulations being tortured then I think we really are better off dead.
Even if condemning 1 million to death on the average is wrong when all options involve the possible deaths of large numbers of people, deriving positive utility from condemning random children to death when there’s no dilemma is an entirely different level of wrong. Utility as a function of lives should flatten out but not slope back down, assuming overpopulation isn’t an issue. The analogy isn’t valid. Let’s give up on the killing children example.
It may be that, in a certain sense, one is more ‘wrong’ than the other. However, both amount to an intentional choice that more humans die, and I would say that if you value human life, they are equally poor choices.
How can 10 million humans not be 10 times as valuable as 1 million humans? How does the value of a person’s life depend on the number of other people in danger?
Yes! Agreed completely, in cases where doing the experiment is practical. All our scenarios seem to involve killing large numbers of people, so the experiment is not practical. I don’t see any reliable path forward—maybe we’re just stuck with not knowing what people prefer in those situations any time soon.
I’m inclined to say that my intuitions are probably fairly good on these sort of hypothetical scenarios, provided that the implications of my choices are quite distant and do not affect me personally (i.e. I would be more sceptical of my intuitions if I was in one of the groups).
How can 10 million humans not be 10 times as valuable as 1 million humans? How does the value of a person’s life depend on the number of other people in danger?
I already answered that. The first few hundred survivors are much more valuable than the rest. Even if survival isn’t an issue, the trillionth human adds much less to what I value about humanity than the 100th human does.
I haven’t seen any argument for total utility being proportional to total number of people other than bald assertions. Do you have anything better than that?
However, I don’t think humanity is a priori valuable, and if humanity now consists of 99.9% simulations being tortured then I think we really are better off dead.
It’s your choice whether you count those simulations as human or not. Be sure to be aware of having the choice, and to take responsibility for the choice you make.
I’m inclined to say that my intuitions are probably fairly good on these sort of hypothetical scenarios, provided that the implications of my choices are quite distant and do not affect me personally
You’re human and you’re saying that humanity is not a priori valuable? What?
I haven’t seen any argument for total utility being proportional to total number of people other than bald assertions. Do you have anything better than that?
I don’t have an absolute binding argument, just some intuitions. Some of these intuitions are:
It feels unfair to value humans differently based on something as arbitrary as the order in which they are presented to me, or the number of other humans they are standing next to.
It seems probable to me that the humans in the group of 1 trillion would want to be treated equally to the humans in the group of 100.
It does not seem like there is anything different about the individual members of a group of 100 humans and a group of a trillion, either physically or mentally. They all still have the same amount of subjective experience, and I have a strong intuition that subjective experience has something very important to do with the value of human life.
It does not feel to me like I become less valuable when there are more other humans around, and it doesn’t seem like there’s anything special enough about me that this cannot be generalised.
It feels elegant, as a solution. Why should they become less valuable? Why not more valuable? Perhaps oscillating between two different values depending on parity? Perhaps some other even weirder function? A constant function at least has a certain symmetry to it.
These are just intuitions, they are convincing to me, but not to all possible minds. Are any of them convincing to you?
It’s your choice whether you count those simulations as human or not. Be sure to be aware of having the choice, and to take responsibility for the choice you make.
Is it also my choice whether I count black people or women as human?
In a trivial sense it is my choice, in that the laws of rationality do not forbid me from having any set of values I want. In a more realistic sense, it is not my choice, one option is obviously morally repugnant (to me at any rate) and I do not want it, I do not want to want it, I do not want to want to want it and so on ad infinitum (my values are in a state of reflective equilibrium on the question).
You’re human and you’re saying that humanity is not a priori valuable? What?
Humans are valuable. Humanity is valuable because it consists of humans, and has the capacity to create more. There is no explicit term in my utility for ‘humanity’ as distinct from the humans that make it up.
Odd, my intuitions are different. Taking the first example:
It feels unfair to value humans differently based on something as arbitrary as the order in which they are presented to me, or the number of other humans they are standing next to.
If I’m doing something special nobody else is doing and it needs to be done, then I’d better damn well get it done. If I’m standing next to a bunch of other humans doing the same thing, then I’m free! I can leave and nothing especially important happens. I am much less important to the entire enterprise in that case.
If I’m doing something special nobody else is doing and it needs to be done, then I’d better damn well get it done. If I’m standing next to a bunch of other humans doing the same thing, then I’m free! I can leave and nothing especially important happens. I am much less important to the entire enterprise in that case.
The instrumental value of a human may vary from one human to the next. It doesn’t seem to me like this should always go down, though; for instance, if you have roughly one doctor for every 200 people in your group, then each doctor is roughly as instrumentally valuable whether the total number of people is 1 million or 1 billion.
But this is all beside the point, since I personally assign terminal value to humans, independent of any practical use they have (you can’t value everything only instrumentally; trying to do so leads to an infinite regress). I am also inclined to say that except in edge cases, this terminal value is significantly more important than any instrumental value a human may offer.
Coming back to the original discussion we see the following:
The simulations are doing no harm or good to anyone, so their only value is terminal.
The humans on earth are causing untold pain to huge numbers of sentient beings simply by breathing, and may also be doing other things. They have a terminal value, plus a huge negative instrumental value, plus a variety of other positive and negative instrumental values, which average out at not very much.
Yup, you really are on the pro-mass-suicide side of the issue. Whatever. Be sure to pay attention to the proof about bounded utility and figure out which of the premises you disagree with.
How can 10 million humans not be 10 times as valuable as 1 million humans? How does the value of a person’s life depend on the number of other people in danger?
Heavily for most people—due to scope insensitivity. Saving 1 person makes you a hero. Saving a million people does not produce a million times the effect. Thus the size sensitivity.
I am aware that it happens. I’m just saying that it shouldn’t. I’m making the case that this intuition does not fit in reflective equilibrium with our others, and should be scrapped.
I’m not the person who downvoted you, but I suspect the reason was that when you said this:
You can travel from here to China and back. Therefore, caring about China has at least a potential instrumental consequence on the rest of my life. You can’t travel from here to the aliens’ simulation and back, so caring about what happens there imposes costs on the rest of my life but no benefits. …. Now, if the black spheres had decent I/O capabilities and you could outsource human intellectual labor tasks to the simulations, I suppose it would make sense to care about what happens there. People can’t do useful work while they’re being tortured, so that wasn’t the scenario in the story.
You implied that it’s wrong or nonsensical to care about other people’s happiness/absence of suffering as a terminal value. We are “allowed” to have whatever terminal values we want, except perhaps contradictory ones.
Explain to me why Fred shouldn’t exterminate his species, or tell me that he should.
The extortion aspect isn’t essential. Fred could have been manipulated by true claims about making simulated people super happy.
I don’t know what it means for a person to be simulated. I don’t know if the simulated people have consciousness. Are we talking about people whose existence feels as real to themselves as it would to us? This is NOT an assumption I ever make about simulations, but should I consider it so for the sake of the argument?
If their experience doesn’t feel real to themselves, then obviously there isn’t any reason to care about what makes them happy or unhappy, that would be Fred being confused, as he conflates the experience of real people with the fundamentally different simulated people.
If their internal experience is as real as ours, then obviously it wouldn’t be the extermination of Fred’s species; some of his species would survive in the simulations, if in eternal captivity.
He should or shouldn’t exterminate his flesh-and-blood species based on whether his utility function assigns a higher value to a free (and alive) humanity than to a trillion individual sentients being happy.
For my part, I’d still choose a free and alive humanity. But that’s an issue that depends on what terminal values we each have.
You seem to be collecting some downvotes that should have gone to me. To even things out, I have upvoted three of your comments. Feel free to downvote three of mine.
I fully agree, by the way, on the distinction between the moral relevance of simulated humans, who have no ability to physically influence our world, and the moral relevance of distant people here on earth, who physically influence us daily (though indirectly through a chain of intermediary agents).
Simulated persons do have the ability to influence us informationally, though, even if they are unaware of our existence and don’t recognize their own status as simulations. I’m not sure what moral status I would assign to a simulated novelist—particularly if I liked his work.
ETA: To Normal_Anomaly: I do not deny people the right to care about simulations in terms of their own terminal values. I only deny them the right to insist that I care about simulations. But I do claim the right to insist that other people care about Chinese, for reasons similar to those Tim has offered.
Simulated persons do have the ability to influence us informationally, though, even if they are unaware of our existence and don’t recognize their own status as simulations. I’m not sure what moral status I would assign to a simulated novelist—particularly if I liked his work.
But where’s the drama in that?
General Thud! Wake up! The aliens have landed. They have novels and want an agent!
Err, the point of having a decision theory that makes you go to war against extortionists is not to have war, but to have no extortionists. Of course you only want to do that against potential extortionists who can be “dissuaded”. Suffice it to say that the problem is not entirely solved, but the point is that it’s too early to say “let’s not care about simulated torture because otherwise we’ll have to give in to extortion” given that we seem to have decision theory approaches that still show promise of solving such problems without having to change our values.
caring about simulated torture appears to be cost without benefit.
Generally the benefit of caring about any bad thing is that if you care about it there will be less of it, because you will work to stop it.
Well, Fred cared, and his reaction was to propose exterminating humanity. I assume you think that was the wrong decision. Can you say why?
If you care about simulated torture (or simulated pleasure), and you’re willing to shut up and multiply, then anybody with a big enough computer can get you to do anything even when that computer has no inputs or outputs and makes absolutely no difference to the real world. I think it’s better to adjust oneself so one does not care. It’s not like it’s a well-tested human value that my ancestors on the savannah acted upon repeatedly.
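The arithmetic behind ‘a big enough computer can get you to do anything’, under the assumption that your disutility is linear in the number of simulated victims (the specific figures are placeholders):

```python
# If disutility from simulated torture scales linearly with the number of
# simulated victims, then for any finite concession cost C there is some
# number of simulations N beyond which the threat dominates.

C = 1e12     # disutility of the concession being demanded (placeholder)
d = 1e-6     # disutility you assign per simulated victim tortured (placeholder)

N_needed = C / d
print(N_needed)   # 1e+18 simulated victims -- trivial for a 'big enough computer'
```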
If you care about simulated torture (or simulated pleasure), and you’re willing to shut up and multiply, then anybody with a big enough computer can get you to do anything even when that computer has no inputs or outputs and makes absolutely no difference to the real world.
Do your calculations and preferred choices change if instead of “simulations”, we’re talking about trillions of flesh-and-blood copies of human beings who are endlessly tortured to death and then revived to be tortured again? Even if they’re locked in rooms without entrances or exits, and it makes absolutely no difference to the outside world?
If you care about them, then anybody with a big enough copier-of-humans, and enough torture chambers “can get you to do anything”, as you say. So it’s not really an issue that depends on caring for simulations. I wish the concept of “simulations” wasn’t needlessly added where it has no necessity to be entered.
General Thud would possibly not care if it was the whole real-life population of China that got collected by the aliens, in exchange for a single village of Thud’s own nation.
The issue of how-to-deal-with-extortion is a hard one, but it’s just made fuzzier by adding the concept of simulations into the mix.
The issue of how-to-deal-with-extortion is a hard one, but it’s just made fuzzier by adding the concept of simulations into the mix.
I agree that it’s a fuzzy mix, but not the one you have in mind. I intended to talk about the practical issues around simulations, not about extortion.
Given that the aliens’ extortion attempt cost them almost nothing, there’s not much hope of gaming things to prevent it. Properly constructed, the black spheres would not have an audit trail leading back to the aliens’ home, so a competent extortionist could prevent any counterattack. Extortion is not an interesting part of this situation.
If you care about them, then anybody with a big enough copier-of-humans, and enough torture chambers “can get you to do anything”, as you say. So it’s not really an issue that depends on caring for simulations. I wish the concept of “simulations” wasn’t needlessly added where it has no necessity to be entered.
Right. It’s an issue about caring about things that are provably irrelevant to your day-to-day activities.
I intended to talk about the practical issues around simulations, not about extortion.
If you don’t want to be talking about extortion, we shouldn’t be talking about simulations in the context of extortion. So far as I can tell, the points you’ve made about useless preferences only matter in the context of extortion, where it doesn’t matter whether we’re talking about simulations or real people who have been created.
If it’s about caring about things that are irrelevant to your everyday life, then the average random person on the other side of the world honestly doesn’t matter much to you. They certainly wouldn’t have mattered a few hundred years ago. If you were transported to the 1300s, would you care about Native Americans? If so, why? If not, why are you focusing on the “simulation” part?
If it turns out that OUR universe is a simulation, I assume you do not consider our creators to have an obligation to consider our preferences?
Right. It’s an issue about caring about things that are provably irrelevant to your day-to-day activities.
Caring about those torturees feels a bit like being counterfactually mugged. Being the sort of person (or species) that doesn’t care about things that are provably irrelevant to your day-to-day activities would avoid this case of extortion, but depending on the universe that you are in, you might be giving up bigger positive opportunities.
Caring about those torturees feels a bit like being counterfactually mugged. Being the sort of person (or species) that doesn’t care about things that are provably irrelevant to your day-to-day activities would avoid this case of extortion, but depending on the universe that you are in, you might be giving up bigger positive opportunities.
I don’t understand yet. Can you give a more specific example?
The counterfactual mugging example paid off in dollars, which are typically shorthand for utility around here. Both utility and dollars are relevant to your day-to-day-activities, so the most straightforward interpretation of what you said doesn’t make sense to me.
Yes, it’s definitely not strictly a case of counterfactual mugging; it just struck me as having that flavor. I’ll see if I can be more specific.
At the point in time when you are confronted by omega in the counterfactual mugging scenario, there is provably no way in your day-to-day activities you will ever get anything for your $10 if you cough up. However, having the disposition to be counterfactually muggable is the winning move.
The analogy is that when deciding what our disposition should be with regard to caring about people we will never interact with, it might be the winning move to care, even if some of the decision branches lead to bad outcomes.
The OP has a story where the caring outcome is bad. What about the other stories? Like the one where everyone is living happily in their protected memory libertarian utopia until first contact day when a delegate from the galactic tourism board arrives and announces that he is blacklisting Earth because “I’ve seen some of the things you guys virtually do to each other and there’s no way I’m letting tourists transmit themselves over your networks”. And by the way he’s also enforcing a no-fly zone around the Earth until we “clean up our act”.
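For concreteness, the ex-ante expected value of the ‘muggable’ disposition, using the $10 stake mentioned above and an assumed $10,000 reward on the other branch of the coin flip:

```python
# Counterfactual mugging, ex ante: Omega flips a fair coin. On heads it pays a
# reward, but only to agents it predicts would hand over the stake on tails.
# The $10,000 reward is an assumed figure for illustration.

P_HEADS = 0.5
REWARD = 10_000   # paid on heads, only if you would have paid on tails
STAKE = 10        # handed over on tails, with provably nothing in return

muggable = P_HEADS * REWARD - (1 - P_HEADS) * STAKE   # 4995.0 on average
unmuggable = 0.0                                      # never pays, never gets rewarded

print(muggable, unmuggable)
# The 'muggable' disposition wins on average, even though on the tails branch
# the $10 buys nothing -- which is the flavor of the analogy to 'useless' caring.
```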
Imagine that, instead of simulations, the spheres contained actual people. They are much smaller, don’t have bodies the same shape, and can only seem to move and sense individual electrons, but they nonetheless exist in this universe.
It’s still exactly the same sphere.
In any case, you only answered the first question. Why must something exist forever for it to matter morally? It’s pretty integral to any debate about what exactly counts as “real” for this purpose.
Why must something exist forever for it to matter morally?
Fundamentally this is a discussion about preferences. My point is that having preferences unconnected to your own everyday life doesn’t promote survival or other outcomes you may want that are connected to your everyday life. In the long term the people we interact with will be the ones that win their everyday life, and in the shorter term the people who have power to do things will be the ones that win their everyday life. To the extent you get to choose your preferences, you get to choose whether you’ll be relevant to the game or not.
To answer your question, if something stops existing, it stops pertaining to anybody’s everyday life.
But fundamentally this conversation is broken. I really don’t care much about whether you like my preferences or whether you like me. Human preferences generally do not have guiding principles behind them, so asking me needling questions trying to find contradictions in my guiding principles is pointless. If, on the other hand, you proposed a different set of preferences, I might like them and consider adopting them. As you can tell, I don’t much like preferences that can be gamed to motivate people to exterminate their own species.
Wait, is this a variant on Newcomb’s problem?
(Am I just slow today? Nobody else seems to have mentioned it outright, at least.)
They had other choices though. It would have been similarly inexpensive to offer to simulate happy people.
Even limiting the spheres to a single proof-of-concept would have been a start.
Be sure to watch the ongoing conversation at http://lesswrong.com/lw/5te/a_summary_of_savages_foundations_for_probability/ because there’s a plausible axiomatic definition of probability and utility there from which one can apparently prove that utilities are bounded.
For the record, allow me to say that under the vast majority of possible circumstances I am strongly anti-mass-suicide.
To counter your comment, I accuse you of being pro-torture ;)
Well, it’s good to hear that neither of us are against anything, and are fundamentally positive, up-beat people. :-)
Sounds like a set-up for a debate: “Would you like to take the pro-mass-suicide point of view, or the pro-torture point of view?”
Heavily for most people—due to scope insensitivity. Saving 1 person makes you a hero. Saving a million people does not produce a million times the effect. Thus the size sensitivity.
I am aware that it happens. I’m just saying that it shouldn’t. I’m making the case that this intuition does not fit in reflective equilibrium with our others, and should be scrapped.
Yeah, just realized that I’ve just written, in a place where anyone who wants can see, that I would be prepared to wipe out humanity. :O
I’m just lucky that nobody really cares enough to search the entire internet for incriminating comments. :)
I’m not the person who downvoted you, but I suspect the reason was that when you said this:
You implied that it’s wrong or nonsensical to care about other people’s happiness/absence of suffering as a terminal value. We are “allowed” to have whatever terminal values we want, except perhaps contradictory ones.
That’s presumably why he said “my.”
I don’t know what it means for a person to be simulated. I don’t know if the simulated people have consciousness. Are we talking about people whose existence feels as real to themselves as it would to us? This is NOT an assumption I ever make about simulations, but should I consider it so for the sake of the argument?
If their experience doesn't feel real to themselves, then obviously there isn't any reason to care about what makes them happy or unhappy; that would just be Fred being confused, conflating the experience of real people with the fundamentally different simulated people.
If their internal experience is as real as ours, then obviously it wouldn’t be the extermination of Fred’s species, some of his species would survive in the simulations, if in eternal captivity.
He should or shouldn't exterminate his flesh-and-blood species based on whether his utility function assigns a higher value to a free (and alive) humanity than to a trillion individual sentients being happy.
For my part, I'd still choose a free and living humanity. But that's an issue that depends on what terminal values we each have.
Um, I never tried to define sanity. What are you responding to?
Apologies, I did indeed misremember who it was that was talking about “crazy notions”, that was indeed Perplexed.
You seem to be collecting some downvotes that should have gone to me. To even things out, I have upvoted three of your comments. Feel free to downvote three of mine.
I fully agree, by the way, on the distinction between the moral relevance of simulated humans, who have no ability to physically influence our world, and the moral relevance of distant people here on earth, who physically influence us daily (though indirectly through a chain of intermediary agents).
Simulated persons do have the ability to influence us informationally, though, even if they are unaware of our existence and don’t recognize their own status as simulations. I’m not sure what moral status I would assign to a simulated novelist—particularly if I liked his work.
ETA: To Normal_Anomaly: I do not deny people the right to care about simulations in terms of their own terminal values. I only deny them the right to insist that I care about simulations. But I do claim the right to insist that other people care about Chinese, for reasons similar to those Tim has offered.
But where’s the drama in that?
:-)
Relevant to your interests, possibly.
Thanks! Cute story.
Err, the point of having a decision theory that makes you go to war against extortionists is not to have war, but to have no extortionists. Of course you only want to do that against potential extortionists who can be “dissuaded”. Suffice it to say that the problem is not entirely solved, but the point is that it’s too early to say “let’s not care about simulated torture because otherwise we’ll have to give in to extortion” given that we seem to have decision theory approaches that still show promise of solving such problems without having to change our values.
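A back-of-the-envelope way to see why "dissuadable" matters (all numbers here are made up for illustration): the extortionist attempts whenever their expected gain exceeds their expected cost, so a policy of going to war on extortionists only works by adding a term to the cost side.

    # Toy expected-value model for the extortionist's decision. Every number is invented.
    def extortion_ev(p_comply, gain_if_comply, cost_of_attempt, p_war=0.0, cost_of_war=0.0):
        return p_comply * gain_if_comply - cost_of_attempt - p_war * cost_of_war

    # Near-free attempt, no credible retaliation: even a 1% chance of compliance pays.
    print(extortion_ev(0.01, 1000.0, 0.0))              # 10.0 > 0, so they try it
    # The same attempt against someone credibly committed to war on extortionists:
    print(extortion_ev(0.01, 1000.0, 0.0, 0.2, 100.0))  # -10.0 < 0, so they don't bother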
Generally the benefit of caring about any bad thing is that, if you care about it, there will be less of it, because you will work to stop it.
Well, Fred cared, and his reaction was to propose exterminating humanity. I assume you think that was the wrong decision. Can you say why?
If you care about simulated torture (or simulated pleasure), and you’re willing to shut up and multiply, then anybody with a big enough computer can get you to do anything even when that computer has no inputs or outputs and makes absolutely no difference to the real world. I think it’s better to adjust oneself so one does not care. It’s not like it’s a well-tested human value that my ancestors on the savannah acted upon repeatedly.
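To spell out the "big enough computer" worry in numbers (all of them invented for illustration): once you weight simulated suffering linearly and multiply, the threatened term dwarfs any realistic cost of compliance, even though the computer needs no inputs or outputs.

    # Naive shut-up-and-multiply comparison. Every number here is made up.
    disutility_per_simulated_torture = 1.0   # assume each simulated torture counts like a real one
    simulations_threatened = 10**15          # assumed size of the extortionist's computer
    cost_of_compliance = 10**9               # assumed cost, in the same units, of doing what they demand

    threatened = disutility_per_simulated_torture * simulations_threatened
    if threatened > cost_of_compliance:
        print("The naive calculation says: do whatever they ask.")  # this branch fires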
Do your calculations and preferred choices change if, instead of "simulations", we're talking about trillions of flesh-and-blood copies of human beings who are endlessly tortured to death and then revived to be tortured again? Even if they're locked in rooms without entrances or exits, and it makes absolutely no difference to the outside world?
If you care about them, then anybody with a big enough copier-of-humans, and enough torture chambers, "can get you to do anything", as you say. So it's not really an issue that depends on caring for simulations. I wish the concept of "simulations" hadn't been needlessly added where it isn't required.
General Thud would possibly not care if it was the whole real-life population of China that got collected by the aliens, in exchange for a single village of Thud’s own nation.
The issue of how-to-deal-with-extortion is a hard one, but it’s just made fuzzier by adding the concept of simulations into the mix.
I agree that it’s a fuzzy mix, but not the one you have in mind. I intended to talk about the practical issues around simulations, not about extortion.
Given that the aliens’ extortion attempt cost them almost nothing, there’s not much hope of gaming things to prevent it. Properly constructed, the black spheres would not have an audit trail leading back to the aliens’ home, so a competent extortionist could prevent any counterattack. Extortion is not an interesting part of this situation.
Right. It’s an issue about caring about things that are provably irrelevant to your day-to-day activities.
If you don’t want to be talking about extortion, we shouldn’t be talking about simulations in the context of extortion. So far as I can tell, the points you’ve made about useless preferences only matter in the context of extortion, where it doesn’t matter whether we’re talking about simulations or real people who have been created.
If it's about caring about things that are irrelevant to your everyday life, then the average random person on the other side of the world honestly doesn't matter much to you. They certainly wouldn't have mattered a few hundred years ago. If you were transported to the 1300s, would you care about Native Americans? If so, why? If not, why are you focusing on the "simulation" part?
If it turns out that OUR universe is a simulation, I assume you do not consider our creators to have an obligation to consider our preferences?
Caring about those torturees feels a bit like being counterfactually mugged. Being the sort of person (or species) that doesn’t care about things that are provably irrelevant to your day-to-day activities would avoid this case of extortion, but depending on the universe that you are in, you might be giving up bigger positive opportunities.
The primary similarity seems to only be that the logic in question gives results which clash with our moral intuition.
I don’t understand yet. Can you give a more specific example?
The counterfactual mugging example paid off in dollars, which are typically shorthand for utility around here. Both utility and dollars are relevant to your day-to-day-activities, so the most straightforward interpretation of what you said doesn’t make sense to me.
Yes, it's definitely not strictly a case of counterfactual mugging; it just struck me as having that flavor. I'll see if I can be more specific.
At the point in time when you are confronted by Omega in the counterfactual mugging scenario, there is provably no way in your day-to-day activities you will ever get anything for your $10 if you cough up. However, having the disposition to be counterfactually muggable is the winning move.
The analogy is that when deciding what our disposition should be with regards to caring about people we will never interact with, it might be the winning move to care, even if some of the decision branches lead to bad outcomes.
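Rough numbers for the "winning move" claim, using the usual illustration (the $10 stake is from the comment above; the $10,000 counterfactual reward is the stock example, not something specified here):

    # Expected value of the two dispositions, evaluated before Omega's coin flip.
    p_heads = 0.5
    payment = 10     # what you hand over if the coin goes against you
    reward = 10000   # what Omega would have given you, had you been the paying type

    ev_muggable = p_heads * reward + (1 - p_heads) * (-payment)  # 4995.0
    ev_refuser = 0.0

    print(ev_muggable, ev_refuser)
    # Once the coin has already landed against you, paying provably buys you nothing
    # in that branch, yet the muggable disposition is the one that wins on average.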
The OP has a story where the caring outcome is bad. What about the other stories? Like the one where everyone is living happily in their protected-memory libertarian utopia until first contact day, when a delegate from the galactic tourism board arrives and announces that he is blacklisting Earth because "I've seen some of the things you guys virtually do to each other and there's no way I'm letting tourists transmit themselves over your networks". And by the way, he's also enforcing a no-fly zone around the Earth until we "clean up our act".
That sounds like a flaw in the decision theory. What kind of broken decision theory achieves its values better by optimizing for different values?
What do you mean by “the real world”? Why does it matter if it’s “real”?
The real world generally doesn’t get turned off. Simulations generally do. That’s why it matters.
If there were a simulation that one might reasonably expect to run forever, it might make sense to debate the issue.
Imagine that, instead of simulations, the spheres contained actual people. They are much smaller, don’t have bodies the same shape, and can only seem to move and sense individual electrons, but they nonetheless exist in this universe.
It’s still exactly the same sphere.
In any case, you only answered the first question. Why must something exist forever for it to matter morally? It’s pretty integral to any debate about what exactly counts as “real” for this purpose.
Fundamentally this is a discussion about preferences. My point is that having preferences unconnected to your own everyday life doesn’t promote survival or other outcomes you may want that are connected to your everyday life. In the long term the people we interact with will be the ones that win their everyday life, and in the shorter term the people who have power to do things will be the ones that win their everyday life. To the extent you get to choose your preferences, you get to choose whether you’ll be relevant to the game or not.
To answer your question, if something stops existing, it stops pertaining to anybody’s everyday life.
But fundamentally this conversation is broken. I really don’t care much about whether you like my preferences or whether you like me. Human preferences generally do not have guiding principles behind them, so asking me needling questions trying to find contradictions in my guiding principles is pointless. If, on the other hand, you proposed a different set of preferences, I might like them and consider adopting them. As you can tell, I don’t much like preferences that can be gamed to motivate people to exterminate their own species.
I thought this post was an attempt to argue for your set of preferences. If not, what is it?
It was an attempt to answer the question you asked and to indicate a potentially useful thing to talk about instead.