I can’t help but always associate discussions of an experience machine (in whatever form it takes) with television. TV was just the alpha version of the experience machine, and I hear it’s quite popular.
This is more tongue-in-cheek than a serious argument, but I do think that TV shows that people will trade pleasure or even emotional numbness (lack of pain) for authenticity.
And the pre-alpha version was reading books, and the pre-pre-alpha version was daydreaming and meditation.
(I’m not trying to make a reversed slippery slope argument; I just think it’s worth looking at the similarities and differences between solitary enjoyments to get a better perspective on where our aversion to various kinds of experience machines comes from. Many, many, many philosophers and spiritualists have recommended an independent and solitary life beyond a certain level of spiritual and intellectual self-sufficiency. It is easy to imagine that an experience machine would not be much different from that, except perhaps with enhanced mental abilities and freedom from the suffering of day-to-day life—both the kinds of suffering that are easier to deal with in a dignified way, like terminal disease or persistent poverty, and the more insidious kinds, like always being thought creepy by the opposite sex without understanding how or why, being chained by the depression of learned helplessness without any clear way out (while friends or society model you as having magical free will but as failing to exercise it, as a form of defecting against them), or, particularly devastating for the male half of the population, just the average scenario of being born with average looks and average intelligence.
And anyway, how often do humans actually interact with accurate models of each other, rather than with hastily drawn models of each other that are produced by some combination of wishful thinking and implicit and constant worries about evolutionary game theoretic equilibria? And because our self-image is a reflection of those myriad interactions between ourselves and others or society, how good of a model do we have of ourselves, even when we’re not under any obvious unwanted social pressures? Are these interactions much deeper than those that can be constructed and thus more deeply understood within our own minds when we’re free from the constant threats and expectations of persons or society? Do humans generally understand their personal friends and enemies and lovers much better than the friends and enemies and lovers they lazily watch on TV screens? Taken in combination, what do the answers to these questions imply, if not for some people then for others?)
It’s true, but it’s a very small portion of the population that lives life for the sole purpose of supporting their television-watching (or World-of-Warcraft-playing) behaviour. Yes, people come home after work and watch television, but if they didn’t have to work, the vast majority of them would not spend 14 hours a day in front of the TV.
Well, that may be the case, but that only highlights the limitations of TV. If the TV were capable of fulfilling their every need, from food and shelter to self-actualization, I think you’d have quite a few people who’d do nothing but sit in front of the TV.
Um… if a rock was capable of fulfilling my every need, including a need for interaction with real people, I’d probably spend a lot of time around that rock.
Well, if the simulation is that accurate (e.g. its AI passes the Turing Test, so you do think you’re interacting with real people), then wouldn’t it fulfill your every need?
I have a need to interact with real people, not to think I’m interacting with real people.
How can you tell the difference?
Related: what different conceptions of ‘simulation’ are we using that make Eliezer’s statement coherent to him, but incoherent to me? Possible conceptions in order of increasing ‘reality’:
(i) the simulation just stimulates your ‘have been interacting with people’ neurons, so that you have a sense of this need being fulfilled with no memories of how it was fulfilled.
(ii) the simulation simulates interaction with people, so that you feel as though you’ve interacted with people and have full memories and most outcomes (e.g., increased knowledge and empathy) of having done so.
(iii) the simulation simulates real people—so that you really have interacted with “real people”, it’s just that you’ve done so inside the simulation.
(iv) reality is a simulation—depending on your concept of simulation, the deterministic evolution/actualization of reality in space-time is one.
(ii) is a problem, (iii) fits my values but may violate other sentients’ rights, and as for (iv), I see no difference between the concepts of “computer program” and “universe” except that a computer program has an output.
So when you write that you need interaction with real people, were you thinking of (i) or (ii)? I think (ii) or (iii), and I would rule out (ii) only if there is some objective, coherent difference.
I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then I compare that with my expectations based on my judgments. If there is a difference, then I have merely been thinking I am interacting rather than actually interacting.
Over time, I stop making judgments. And in essence, I stop thinking about interacting with the world, and just interact, and see what happens.
The fewer judgments I make, the more difficult the Turing Test becomes, because it is no longer about meeting my expectations but about satisfying my desired level of complexity. Real-world interaction is, by its nature, a complicated set of interacting chaotic equations, and each time I remove a judgment from my repertoire the equation gains a level of complexity, another strange attractor to interact with.
At a certain point of complexity, satisfying the equation becomes impossible for anything except a “god”.
Now, if an AI passes THAT Turing Test, I will consider it a real person.
I think it’d be useful to hear an example of “observing reality without making judgments” versus “observing reality while making judgments”. I’m having trouble figuring out what you believe the difference to be.
Assuming it can provide self-actualization is pretty much assuming the contended issue away.
I can’t help thinking of the great Red Dwarf novel “Better Than Life”, whose concept is almost identical (see http://en.wikipedia.org/wiki/Better_Than_Life). There are a few key differences, though: in the book, the so-called “game heads” waste away in the real world like heroin addicts. Also, the game malfunctions due to one character’s self-loathing. Recommended read.
TV shows that people will trade pleasure or even emotional numbness (lack of pain) for authenticity.

In my experience, most people don’t seem to worry about themselves getting emotionally young; it’s mostly far-view, think-of-the-children stuff. And I’m pretty sure pleasure is a good thing, so I’m not sure in what sense they’re “trading” it (unless you mean they could be having more fun elsewhere?)