caring about simulated torture appears to be cost without benefit.
Generally, the benefit of caring about any bad thing is that if you care about it, there will be less of it, because you will work to stop it.
Well, Fred cared, and his reaction was to propose exterminating humanity. I assume you think his was the wrong decision. Can you say why?
If you care about simulated torture (or simulated pleasure), and you’re willing to shut up and multiply, then anybody with a big enough computer can get you to do anything even when that computer has no inputs or outputs and makes absolutely no difference to the real world. I think it’s better to adjust oneself so one does not care. It’s not like it’s a well-tested human value that my ancestors on the savannah acted upon repeatedly.
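To spell the multiplication out with illustrative symbols (none of these figures come from the story): suppose each simulated victim counts for some fixed disutility u > 0, the computer hosts N of them, and S is the largest real-world stake on the other side of the decision. Then

N \cdot u > S \quad \text{whenever} \quad N > S / u,

so whoever controls N controls the outcome of your calculation, no matter how large S is.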
Do your calculations and preferred choices change if, instead of “simulations”, we’re talking about trillions of flesh-and-blood copies of human beings who are endlessly tortured to death and then revived to be tortured again? Even if they’re locked in rooms without entrances or exits, and it makes absolutely no difference to the outside world?
If you care about them, then anybody with a big enough copier-of-humans, and enough torture chambers, “can get you to do anything”, as you say. So it’s not really an issue that depends on caring about simulations. I wish the concept of “simulations” hadn’t been needlessly added where it isn’t required.
General Thud would possibly not care if it was the whole real-life population of China that got collected by the aliens, in exchange for a single village of Thud’s own nation.
The issue of how-to-deal-with-extortion is a hard one, but it’s just made fuzzier by adding the concept of simulations into the mix.
I agree that it’s a fuzzy mix, but not the one you have in mind. I intended to talk about the practical issues around simulations, not about extortion.
Given that the aliens’ extortion attempt cost them almost nothing, there’s not much hope of gaming things to prevent it. Properly constructed, the black spheres would not have an audit trail leading back to the aliens’ home, so a competent extortionist could prevent any counterattack. Extortion is not an interesting part of this situation.
Right. It’s an issue about caring about things that are provably irrelevant to your day-to-day activities.
If you don’t want to be talking about extortion, we shouldn’t be talking about simulations in the context of extortion. So far as I can tell, the points you’ve made about useless preferences only matter in the context of extortion, where it doesn’t matter whether we’re talking about simulations or real people who have been created.
If it’s about caring about things that are irrelevant to your everyday life, then the average random person on the other side of the world honestly doesn’t matter much to you. They certainly wouldn’t have mattered a few hundred years ago. If you were transported to the 1300s, would you care about Native Americans? If so, why? If not, why are you focusing on the “simulation” part?
If it turns out that OUR universe is a simulation, I assume you do not consider our creators to have an obligation to consider our preferences?
Caring about those torturees feels a bit like being counterfactually mugged. Being the sort of person (or species) that doesn’t care about things that are provably irrelevant to your day-to-day activities would avoid this case of extortion, but depending on the universe that you are in, you might be giving up bigger positive opportunities.
The primary similarity seems to only be that the logic in question gives results which clash with our moral intuition.
I don’t understand yet. Can you give a more specific example?
The counterfactual mugging example paid off in dollars, which are typically shorthand for utility around here. Both utility and dollars are relevant to your day-to-day activities, so the most straightforward interpretation of what you said doesn’t make sense to me.
Yes, it’s definitely not strictly a case of counterfactual mugging; it just struck me as having that flavor. I’ll see if I can be more specific.
At the point in time when you are confronted by Omega in the counterfactual mugging scenario, there is provably no way in your day-to-day activities that you will ever get anything for your $10 if you cough up. However, having the disposition to be counterfactually muggable is the winning move.
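A minimal expected-value sketch of why, evaluated before Omega flips the coin (the $10 is from the scenario above; X stands for whatever Omega would pay out in the other branch, a figure not specified here):

E[\text{muggable}] = \tfrac{1}{2} X - \tfrac{1}{2} \cdot \$10, \qquad E[\text{not muggable}] = 0,

so the paying disposition wins ex ante whenever X > $10, even though in the branch where you actually hand over the $10, it provably buys you nothing.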
The analogy is that when deciding what our disposition should be with regard to caring about people we will never interact with, it might be the winning move to care, even if some of the decision branches lead to bad outcomes.
The OP has a story where the caring outcome is bad. What about the other stories? Like the one where everyone is living happily in their protected-memory libertarian utopia until first contact day, when a delegate from the galactic tourism board arrives and announces that he is blacklisting Earth because “I’ve seen some of the things you guys virtually do to each other and there’s no way I’m letting tourists transmit themselves over your networks”. And by the way, he’s also enforcing a no-fly zone around the Earth until we “clean up our act”.
That sounds like a flaw in the decision theory. What kind of broken decision theory achieves its values better by optimizing for different values?
What do you mean by “the real world”? Why does it matter if it’s “real”?
The real world generally doesn’t get turned off. Simulations generally do. That’s why it matters.
If there were a simulation that one might reasonably expect to run forever, it might make sense to debate the issue.
Imagine that, instead of simulations, the spheres contained actual people. They are much smaller, they don’t have bodies of the same shape, and they can only seem to move and sense individual electrons, but they nonetheless exist in this universe.
It’s still exactly the same sphere.
In any case, you only answered the first question. Why must something exist forever for it to matter morally? It’s pretty integral to any debate about what exactly counts as “real” for this purpose.
Fundamentally this is a discussion about preferences. My point is that having preferences unconnected to your own everyday life doesn’t promote survival or other outcomes you may want that are connected to your everyday life. In the long term, the people we interact with will be the ones who win at their everyday lives, and in the shorter term, the people who have the power to do things will be the ones who win at their everyday lives. To the extent you get to choose your preferences, you get to choose whether you’ll be relevant to the game or not.
To answer your question, if something stops existing, it stops pertaining to anybody’s everyday life.
But fundamentally this conversation is broken. I really don’t care much about whether you like my preferences or whether you like me. Human preferences generally do not have guiding principles behind them, so asking me needling questions trying to find contradictions in my guiding principles is pointless. If, on the other hand, you proposed a different set of preferences, I might like them and consider adopting them. As you can tell, I don’t much like preferences that can be gamed to motivate people to exterminate their own species.
I thought this post was an attempt to argue for your set of preferences. If not, what is it?
It was an attempt to answer the question you asked and to indicate a potentially useful thing to talk about instead.