It’s true, but it’s a very small portion of the population that lives life for the sole purpose of supporting their television-watching (or World-of-Warcraft-playing) behaviour. Yes, people come home after work and watch television, but if they didn’t have to work, the vast majority of them would not spend 14 hours a day in front of the TV.
Well, that may be the case, but that only highlights the limitations of TV. If the TV were capable of fulfilling their every need, from food and shelter to self-actualization, I think you’d have quite a few people who’d do nothing but sit in front of the TV.
Um… if a rock was capable of fulfilling my every need, including a need for interaction with real people, I’d probably spend a lot of time around that rock.
Well, if the simulation is that accurate (e.g. its AI passes the Turing Test, so you do think you’re interacting with real people), then wouldn’t it fulfill your every need?
I have a need to interact with real people, not to think I’m interacting with real people.
How can you tell the difference?
Related: what different conceptions of ‘simulation’ are we using that make Eliezer’s statement coherent to him, but incoherent to me? Possible conceptions in order of increasing ‘reality’:
(i) the simulation just stimulates your ‘have been interacting with people’ neurons, so that you have a sense of this need being fulfilled with no memories of how it was fulfilled.
(ii) the simulation simulates interaction with people, so that you feel as though you’ve interacted with people and have full memories and most outcomes (e.g., increased knowledge and empathy) of having done so.
(iii) the simulation simulates real people, so that you really have interacted with “real people”; you’ve just done so inside the simulation.
(iv) reality is a simulation: depending on your concept of simulation, the deterministic evolution/actualization of reality in space-time is one.
(ii) is a problem, (iii) fits my values but may violate other sentients’ rights, and as for (iv), I see no difference between the concepts of “computer program” and “universe” except that a computer program has an output.
So when you wrote that you need interaction with real people, were you thinking of (i) or (ii)? I would count (ii) or (iii), and would rule out (ii) only if there is some objectively coherent difference.
I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then I compare that with my expectations based on my judgments. If there is a difference, then I am merely thinking I am interacting rather than actually interacting.
Over time, I stop making judgments. In essence, I stop thinking about interacting with the world and just interact, and see what happens.
The fewer judgments I make, the more difficult the Turing Test becomes, as it is no longer about meeting my expectations but about satisfying my desired level of complexity. Real-world interaction is, by its nature, a complicated set of interacting chaotic equations, and each time I remove a judgment from my repertoire, the equation gains a level of complexity: another strange attractor to interact with.
At a certain point of complexity, satisfying the equation becomes impossible for anything short of a “god”.
Now, if an AI passes THAT Turing Test, I will consider it a real person.
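To make the “interacting chaotic equations” and “strange attractor” imagery concrete, here is a minimal sketch using the Lorenz system as an illustrative stand-in; the system, its parameter values, and the step size are assumptions chosen for illustration, not anything specified in the comment.

```python
# Illustrative stand-in only: the Lorenz system is assumed here as an
# example of chaotic dynamics with a strange attractor; it is not the
# commenter's actual "set of interacting chaotic equations".
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one forward-Euler step.

    For these classic parameter values, trajectories settle onto a
    strange attractor."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two trajectories that start a billionth apart...
a = np.array([1.0, 1.0, 1.0])
b = a + 1e-9

for _ in range(10_000):
    a, b = lorenz_step(a), lorenz_step(b)

# ...end up macroscopically far apart: sensitive dependence on initial
# conditions. Faking such dynamics convincingly would require tracking
# state to absurd precision, which is the intuition behind the "harder"
# Turing Test above.
print(np.linalg.norm(a - b))
```

The sketch only illustrates the sensitivity claim: each additional coupled variable is, loosely, “another strange attractor to interact with”, and simulating the joint system well enough to fool an attentive observer gets rapidly harder.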
I think it’d be useful to hear an example of “observing reality without making judgments” and “observing reality while making judgments”. I’m having trouble figuring out what you believe the difference to be.
Assuming it can provide self-actualization is pretty much assuming the contended issue away.