Sorry for not replying earlier; I have lots of stuff going on right now.
So basically you are describing a scenario where humanity goes digital via WBE and lives inside a utopian virtual reality (either entirely or most of the time), thus solving all or most of our current problems without the need to create an FAI and risk utter destruction in the process. Not a bad question to have asked, as far as I am concerned; it is a scenario I have considered as well.
There are many good objections to this virtual-reality idea, but I’ll just name the most obvious one that comes to mind:
If anyone but an FAI is running this utopia, it seems very plausible that it would result in a dystopia, or at best in something vaguely resembling our current world. Without some kind of superior intelligence monitoring the virtual reality, mere humans would have to make the decisions about how Utopiaworld is run, and this human fiddling makes it vulnerable to all kinds of human stupidities. Now, if the timeline over which this “solution” is supposed to work approaches infinity, then at some point collapse due to human error seems inevitable. Moreover, a virtual world opens up possibilities (and problems!) we can hardly imagine right now: possibilities and problems vast enough that a human mind may simply not be adequate to solve and control them.
So unless you are willing to change the human mind in order to adapt it to a utopian virtual reality, this seems to be no viable option. I won’t elaborate on why mandatory brain-rewiring before uploading a human mind into a virtual reality run by human-like agents would be grotesque.
I guess what my objection really comes down to is that humans are stupid, and thus we cannot be trusted to do anything right in the long run. Or at least we certainly can’t do things right enough to build a lasting and well-operating utopia. For that to work we would need extensive brain enhancements, and if you go down this path, I think you run into the same problems I have already described:
We humans are self-obsessed shits, and bolting on a rationality module won’t change that. It would make us better at getting the things we want, but the things humans seem to want most of the time (judging by our current actions) are not working utopias but the things typical of self-obsessed evolved agents: Power, social status, and sex. Could this change by virtue of our becoming more rational and intelligent? There’s a good chance it would. People would certainly think more (and more efficiently) about the future once they are smarter, but if you make humans smart enough to produce something like a virtual utopia, you have already taken the first step toward creating the uFAI you hoped to avoid in the first place. Humans would want to become even smarter than they (and all those other enhanced humans) already are, thus starting a race to the top. The result may be something terrible that has been envisioned endlessly throughout the ages: Powerful gods with the petty goals of humans.
So my current conclusion after thinking about the virtual reality scenario is that it’s not a viable long-term solution for humanity. That is not to say that I’ve studied and thought about this option extensively, but I think the objection I detailed is pretty convincing.
In a sentence: We’re not smart enough to directly construct a utopia (whether real or virtual), and if we actually made ourselves smart enough to do that, we’d probably raise the odds of uFAI. So risk-wise we would be worse off than trying to build an FAI from scratch, which would be more competent at solving our problems than humans could ever be (whether intelligence-enhanced or not).
Also, I realize that a virtual reality is a great way to free humanity from resource scarcity and from restricted bodily abilities/requirements, but virtually all other problems (moral, social, etc.) are left rather untouched by the virtual reality solution. A virtual reality is not automatically a utopia. In a way it would be like living in a world where everyone is immortal and filthy rich: everyone has huge power over material (or, in our case, virtual) “things”, yet this doesn’t solve all your social or personal problems.
Thank you for taking the time to write out such a thoughtful reply. I will be taking the time to return the favor shortly.
EDIT: Here’s the (long) reply:
If anyone but an FAI is running this utopia
What does it mean to run a utopia? In order to run something, one must make decisions. What sort of decisions would this person or FAI be making? I realize it’s hard to predict exactly what the scenario would be like, but we can speculate all we like (and then see which ones sound most realistic/probable). Also, who said that any one person would be “running” this virtual reality? It could be democratic.
Also, who said that utopias have to be populated by other real people/uploads rather than by non-conscious programs (as in a video game or a dream)? I can understand people’s desire to be in “reality” with “real people”, but this wouldn’t be necessary for a video-game-type virtual reality.
Now, if the timeline over which this “solution” is supposed to work approaches infinity, then at some point collapse due to human error seems inevitable
I think such a collapse is probably less inevitable than the heat death of the universe. The transition to hardware would permit people to survive on substrates that could exist in space, with no need to inhabit a planet. There would no longer be a need for food, only electricity (which would be easily available in the form of solar energy). Spreading this substrate in every direction would reduce the risk of the collapse of “society”.
I won’t elaborate on why mandatory brain-rewiring before uploading a human mind into a virtual reality run by human-like agents would be grotesque.
It won’t necessarily be mandatory to rewire oneself to be smarter, kinder, and more emotionally stable. But anyone with the petty desires for power, sex, and status (as you claim) would willingly choose to rewire themselves (or risk being left behind).
We humans are self-obsessed shits, and bolting on a rationality module won’t change that. It would make us better at getting the things we want, but the things humans seem to want most of the time (judging by our current actions) are not working utopias but the things typical of self-obsessed evolved agents: Power, social status, and sex. Could this change by virtue of our becoming more rational and intelligent? There’s a good chance it would
Today, power is measured in terms of wealth or influence. Wealth seems like it would cease to be a relevant factor, since economics depends on scarcity (have you ever had to buy air?), and in an age in which everything is digital, the only limitation is computational capacity.
Although this is hardly certain, I hypothesize that (“actual”) sex would cease to be a relevant motivator of uploads. Sex in a virtual reality would be free and clean, and would offer the user the ability to simulate situations that wouldn’t be available to them in real life.
Status today is usually sought in order to get sex (see above) and is usually acquired by means of wealth (see above).
Personally, I believe that once we become uploads, the chemical imbalances and irrational beliefs that drive our behavior (for evolutionary purposes) will dissipate and we will be infinitely happier than we have ever been.
Powerful gods with the petty goals of humans.
Agreed that it is frightening. Nice way of putting it.
So risk-wise we would be worse off than trying to build an FAI from scratch, which would be more competent at solving our problems than humans could ever be (whether intelligence-enhanced or not).
This is the key question: which path is riskier? I acknowledge that P(utopia|FAI+WBE) > P(utopia|WBE). But I don’t acknowledge that P(utopia|AGI+WBE) > P(utopia|WBE).
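To see why one can grant the first inequality while rejecting the second, it helps to unpack the disputed term. The decomposition below is my own sketch, not anything either of us wrote; the event labels are informal, with “AGI” standing for “an AGI gets built”, which then turns out either Friendly (FAI) or not (uFAI).

```latex
% My own sketch: total probability over the two ways a built AGI can
% turn out. All event labels are informal stand-ins, not defined terms.
\begin{align*}
P(\text{utopia} \mid \text{AGI} + \text{WBE})
  &= P(\text{FAI} \mid \text{AGI}) \cdot P(\text{utopia} \mid \text{FAI} + \text{WBE}) \\
  &\quad + P(\text{uFAI} \mid \text{AGI}) \cdot P(\text{utopia} \mid \text{uFAI} + \text{WBE})
\end{align*}
```

If P(utopia|uFAI+WBE) is close to zero, the disputed inequality holds only when P(FAI|AGI) is high enough for the first term alone to beat P(utopia|WBE), and that estimate is exactly where we seem to differ.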
Also, I realize that a virtual reality is a great way to free humanity from resource scarcity and from restricted bodily abilities/requirements, but virtually all other problems (moral, social, etc.) are left rather untouched by the virtual reality solution
I believe these problems are caused by scarcity (scarcity of intelligence, money, and access to quality education). And as I’ve pointed out earlier, I think that the pursuit of sex, power, and status will disappear.
A virtual reality is not automatically a utopia. In a way it would be like living in a world where everyone is immortal and filthy rich: everyone has huge power over material (or, in our case, virtual) “things”, yet this doesn’t solve all your social or personal problems.
Personal problems are caused by unhappiness, craving, addiction, etc. These can all be traced back to brain states, and those brain states could be “fixed” (voluntarily) by altering the digital settings of the digital neurochemical levels. (Though I believe we will have a much better idea of how to alter the brain than simply adjusting chemical levels; the current paradigm in neuroscience has a hammer (drugs), and so it tends to look at all the problems as nails.)
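To make that last idea concrete, here is a toy sketch of what voluntary, self-authorized adjustment of such settings might look like. Everything in it (the class, the function, the 0–2 “sane band”) is invented for illustration; no such API exists, and per the caveat above, raw chemical levels are probably the wrong abstraction anyway.

```python
# Toy sketch only: every name here is hypothetical. It illustrates an
# upload voluntarily adjusting its own "digital neurochemical levels",
# with consent and sanity bounds enforced by the substrate.
from dataclasses import dataclass, fields

@dataclass
class NeurochemicalSettings:
    """Global modulator levels of a hypothetical whole-brain emulation.

    A value of 1.0 means the upload's own pre-upload baseline.
    """
    serotonin: float = 1.0
    dopamine: float = 1.0
    cortisol: float = 1.0

def request_adjustment(settings: NeurochemicalSettings,
                       modulator: str,
                       new_level: float,
                       self_authorized: bool) -> None:
    """Apply one self-chosen tweak; refuse anything non-voluntary."""
    if not self_authorized:
        raise PermissionError("adjustments must be chosen by the upload itself")
    if modulator not in {f.name for f in fields(settings)}:
        raise ValueError(f"unknown modulator: {modulator}")
    if not 0.0 <= new_level <= 2.0:
        raise ValueError("levels outside the sane band [0, 2] are refused")
    setattr(settings, modulator, new_level)

# Example: an upload voluntarily dialing chronic stress down a notch.
me = NeurochemicalSettings()
request_adjustment(me, "cortisol", 0.7, self_authorized=True)
print(me)  # NeurochemicalSettings(serotonin=1.0, dopamine=1.0, cortisol=0.7)
```

The one load-bearing detail is the consent check: “voluntarily” in the paragraph above means the substrate enforces self-authorization instead of letting anyone else turn the dials.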