I think most people agree about the importance of “the substrate universe”, whether that universe is this one, or one actually higher than our own. But suppose we argued against a more compelling reconstruction of the proposal, built by modifying the experience machine in various ways? The original post did the opposite, of course: it removed the off button in a gratuitous way that highlights the loss (rather than the extension) of autonomy. Maybe if we repair the experience box too much it stops functioning as the same puzzle, but I don’t see how an obviously broken box is that helpful an intuition pump.
For example, rather than just giving me plain old physics inside the machine, the Matrix experience of those who knew they were in the Matrix seemed nice: astonishing physical grace, the ability to fly and walk on walls, and access to tools and environments of one’s choosing. Then you could graft on the good parts from Diaspora, so that going into the box automatically comes with effective immortality, faster subjective thinking processes, real-time access to all the digitally accessible data of human civilization, and the ability to examine and cautiously optimize the algorithms of one’s own mind, using an “exoself” to adjust your “endoself” (so that you could, for example, edit addictions out of your psychological makeup except when you wanted to go on a “psychosis vacation”).
I’d also want to have a say in how human civilization progressed. If there were environmental or astronomical catastrophes, I’d want to make sure they were prevented, or at least that people’s simulators were safely evacuated. If we could build the kinds of simulators I’m talking about, then people in simulators could probably build and teleoperate all kinds of neat machinery for emergencies, repair of the experience machines, space exploration, and so on.
Another argument sometimes raised against experience machines is that they wouldn’t be as “challenging” as the real world because you’d be in a “merely man-made” world… but the proper response is simply to augment the machine so that it offers more challenges, and more meaningful challenges, than mere reality. For example, the environments you could call up to give you arbitrary levels of challenge might be calibrated to be “slightly beyond your abilities about 50% of the time, but always educational and fun”.
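(As an aside, that “slightly beyond your abilities about 50% of the time” rule is essentially a staircase procedure of the sort used in psychophysics and game matchmaking. A minimal sketch in Python, where the function name, step sizes, and toy learner are all hypothetical choices of mine rather than anything from the thought experiment:)

    # Hypothetical sketch: nudge challenge difficulty toward a ~50% success rate.
    # With symmetric steps, the long-run success rate converges toward target_rate.
    def update_difficulty(difficulty: float, succeeded: bool,
                          target_rate: float = 0.5, step: float = 0.1) -> float:
        if succeeded:
            return difficulty + step * (1.0 - target_rate)  # success: make it harder
        else:
            return difficulty - step * target_rate          # failure: make it easier

    # Toy learner who succeeds whenever difficulty is below their (unknown) skill.
    skill, difficulty = 3.0, 1.0
    for _ in range(100):
        difficulty = update_difficulty(difficulty, difficulty < skill)
    print(f"difficulty settled near skill: {difficulty:.2f}")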
Spending time in one of these improved experience machines would be way better than, say, spending the equivalent time in college, because mere college graduates would pale in comparison to people who’d spent the same four years gaining subjective centuries of hands-on experience dealing with issues whose “challenge modes” were vastly more complex puzzles than most of the learning opportunities on our boring planet. Even for equivalent subjective time, I think the experience machines would be better, because they’d be calibrated precisely to the person, with no worries about educational economies of scale… instead of lectures, conversations… instead of case studies, simulations… and so on.
The only intelligible arguments against the original “straw man” experience machine that remain compelling to me after repairing the machine’s design (though perhaps there are others I’m not clever enough to notice) are the ones focused on social relationships.
First, one of the greatest challenges in the human environment is other humans. If you’re setting up an experience machine scenario with a sliding scale of challenge, where do you get the characters from? Do you just “fabricate” the facade of someone who presents a particular kind of coordination challenge due to their difficult personal quirks? If you’re going to simulate conflict, do you just “fabricate” enemies? And hurt them? Where do all these people come from, and what is the moral significance of their existence? Not being distressed by this is probably a character defect, but the alternative seems to involve inevitable distress.
And then, on the other side of the coin, there are many people who I love as friends or family, even though they are not physically gorgeous, fully self-actualized, passionately moral, polymath “Greek gods”. Which is probably a lucky thing, because neither am I :-P
But if they refused to enter repaired experience machines (networked, of course, so we could hang out anytime we wanted), the only way I could interact with them would be through an avatar in the substrate world, where they were plodding along without the same growth opportunities. Would I eventually see them as grossly incapacitated caricatures of what humans are truly capable of? How much distress would that cause? Or suppose they opted in and then got vastly more out of their experience machine than I got out of mine? Would I feel inferior? Would I need to be protected from the awareness of my inferiority for my own good? Would they feel sorry for me? Would they need to be protected from my disappointingness? Would we all just drift apart, putting “facade interfaces” between each other, so that everyone’s understanding of other people drifted farther and farther out of calibration, with me appearing better than actual to them and them worse than actual to me?
And then, if something in the external universe supporting our experience machines posed a real challenge that involved actual choices, we’d be back to the political challenges of coordinating with other people where the stakes are authentic and substantial. We’d probably debate, from inside the experience boxes, about what the world-manipulation machines should do, and the arguments would inevitably carry some measure of distress for any “losing factions”.
It is precisely the existence of morally significant “non-me entities” that creates challenges I don’t see how to avoid under any variety of experience machine. It’s not that I particularly care whether my desk is real or not; it’s that I care that my family is real.
Given the state of human technology, one could argue that human civilization (especially in the developed world, and hopefully for everyone within a few decades) is already in something reasonably close to an optimal experience machine. We have video games. We have reasonable material comfort. We have raw NASA data online. We can cross our fingers and somewhat reasonably imagine technology improving medical care to cure death and stupidity… But the thing we may never have a solution to is the existence of people we care about who are not exactly as they would be if their primary concern were our own happiness, while we recognize that we are constrained in similar ways, especially when we care about multiple people who want different things for us.
Perhaps this is where we cue Sartre’s version of a “three body problem”?
Unless… what if many of the challenges in politics and social interactions happen because people in general are so defective? If my blindnesses and failures compound against those of others, it sounds like a recipe for unhappiness to me. But if experience machines could really help us become more the kind of people we wanted to be, perhaps other people would be less hellish after we got the hang of self-improvement?
I like this comment; however, I think this is technically false:

I think most people agree about the importance of “the substrate universe”, whether that universe is this one, or one actually higher than our own.
I think most people don’t have an opinion about this, and don’t know what “substrate” means. But then, “most people” is a bit hard to nail down in common usage.
I think it’s useful to quantify over “people who know what the question would mean” in most cases.
Thinking through some test cases, I think you’re probably right.
I think you missed the bit where the machine gives you a version of your life that’s provably the best you could experience. If that includes NASA and vast libraries, then you get those.