If you really could lock everyone who wanted one into their own personal optimal-experience simulator, where the only catch was that it wasn’t “real,” and if we stipulate that the machine works as advertised, I’d sign up without much hesitation.
Can I ask you a hypothetical? Say you’re Abe Lincoln, and you’re planning to free the slaves (or insert your favorite historical example of some profoundly good action). Now, suppose you have good reason to think this will be very difficult and involve a costly war. But someone has recently built an experience machine, and you’re sure that within the experience machine you could free the slaves without any trouble: an easy and bloodless emancipation would make you happier, and that’s what the experience machine will get you.
So on the principle that you should take the most efficient means to your end, should you just go into the experience machine for life, and declare the (virtual) slaves free? Or is this not actually a means to your end?
Presumably that’s where the “everyone who wants one” caveat comes in. Most people aren’t Abe Lincoln and, as far as I can tell, don’t really care whether their actions have any significant effect on people outside their social circle. They probably wouldn’t care too much about whether that social circle is real or simulated, either. As much as I like the individuals in my current social circle, if I were starting from zero rather than replacing them, I wouldn’t mind ending up with a fully simulated social circle so long as it was similarly engaging / persistent / etc.
Well, how would you answer my hypothetical?
And suppose I rephrased it thus: your friend needs help, say, getting through a painful divorce, and you know that this will be a difficult process taking many years. But you also know that if you put yourself in an experience machine for the rest of your life, you could soothe your (virtual) friend’s wounded soul in half an hour. Supposing the move to the experience machine doesn’t interfere with any of your other plans (they could be simulated too, of course), would you consider the experience machine simply a more efficient means to your end? Or would it fail to achieve your end at all?
I would consider my goal not accomplished at all... it seems to be one of the basic tenets of my value system that other people exist and that any other person is just as valuable as I am, so one of my responsibilities in life is to help other people. I am very, very leery of futures where I would end up in a virtual world: if its inhabitants are simulated deeply enough to be sentient beings, I would care about them as well, but I would be abandoning any chance of influencing the fate of everyone else.
I think maybe I could justify it to myself if I knew in advance, and could prove to myself, that everyone else was also ending up in a virtual world where they would be even happier… But the idea still makes me uncomfortable. I think it seems like ‘cheating at life’, somehow, taking the easy way out. Although that’s probably a random emotional prejudice more than a logical objection.
I think this is exactly right. I don’t understand exactly why the machine would be unsatisfying here either, but I don’t think it’s a random emotional prejudice. I think we’re on to something.
That is sort of an illogical question. What it boils down to is “Is your goal to feel like you are helping somebody, or is your goal to help somebody you are actually emotionally attached to?” If it’s the former, then aside from realizing I have some pretty vapid and pointless goals, I’d get in the machine. But if I had the clarity to see my part in that goal system, helping others probably wouldn’t be high on my list of things to do anyway; I would get in without a glance backward and start thinking up something interesting. If I genuinely believed that I cared about them and got in the box anyway, would I really qualify as a sentient being? The machine might as well be a meat-grinder in that case.
If I genuinely cared about the person in question, I would realize that with me inside the machine he would still be suffering; my social programming would not easily allow me to deviate from the “right thing to do”, and I would refuse to get in.
If my actual friend is actually hurting, my goal is to actually fix that; a simulation of the individual isn’t relevant to that. But I don’t care very much about people who aren’t my friends, in most cases, so if it’s a choice between becoming friends with real person Alex, who might get hurt and need support that I can’t give, and simulated person Abe, who won’t present me with any problems that I can’t actually solve, I might well choose Abe, so long as Abe is as interesting as Alex in every other way.
My point was just that we would resist the experience machine if we took ourselves to have ethical obligations or a chance to do something good in the world we live in. ‘Real’ isn’t quite the issue here, since if you started in the experience machine you might justifiably want to stay there instead of moving to another one or into the real world.
In other words, over and above our experience of the world, the world we’ve been living in has a basic ethical importance for us. We wouldn’t give it up for just anything.
With my friend’s permission, I’d rent a two-seater experience machine for a month or so, sit myself down in the administrator position, and use it to play through various scenarios calculated to be useful for my friend’s psychological well-being.
What the hell kind of a utopia only has a holodeck with a one-way door?
Don’t fight the hypothetical!
Fighting the hypothetical is a legitimate tactic when there’s a contradiction in the hypothetical premises. In this case, we’re assuming a world where people have learned to create perfectly immersive virtual environments, but somehow forgotten how to charge money for valuable services or build a power switch that works on a timer, which seems contradictory based on what I know about technological development.
That’s irrelevant to the question (hence, a case of fighting the hypothetical). Mass Driver said he would enter an experience machine permanently (that’s how I took the word ‘lock’) without much hesitation, if the machine-world were better but unreal. The purpose of my hypothetical was to show that while ‘real’ isn’t quite the issue, there is something about one’s own world that we’re ethically attached to. And we’re attached in such a way that an experientially identical world, differing only in not being our original world, is for that reason significantly different.