Related: what different conceptions of ‘simulation’ are we using that make Eliezer’s statement coherent to him, but incoherent to me? Possible conceptions in order of increasing ‘reality’:
(i) the simulation just stimulates your ‘have been interacting with people’ neurons, so that you have a sense of this need being fulfilled with no memories of how it was fulfilled.
(ii) the simulation simulates interaction with people, so that you feel as though you’ve interacted with people and have the full memories and most of the outcomes (e.g., increased knowledge and empathy) of having done so
(iii) the simulation simulates real people, so that you really have interacted with “real people”; you’ve just done so inside the simulation
(iv) reality is a simulation—depending on your concept of simulation, the deterministic evolution/actualization of reality in space-time is one
(ii) is a problem; (iii) fits my values but may violate other sentients’ rights; and as for (iv), I see no difference between the concepts of “computer program” and “universe” except that a computer program has an output.
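(To make the (iv) point concrete, here is a toy illustration of my own, nothing more: a “universe” that is literally just a state plus a fixed, deterministic update rule. I picked Rule 110, a standard one-dimensional cellular automaton; the choice of rule and every name in the sketch are mine.)

```python
# Toy illustration (my construction): a "universe" as state + fixed rule.
# Rule 110 is a standard one-dimensional cellular automaton; the
# deterministic evolution is the "universe", and the only thing that makes
# it a "program" in the ordinary sense is that we choose to print an output.

RULE = 110  # the update rule, encoded as an 8-bit lookup table

def step(cells):
    """Deterministically evolve every cell one tick, wrapping at the edges."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 40 + [1] + [0] * 40   # initial condition: a single live cell
for _ in range(20):                 # "evolution/actualization" in time
    print("".join(".#"[c] for c in state))  # the optional "output"
    state = step(state)
```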
So when you write that you need interaction with real people, were you thinking of (i) or (ii)? I think (ii) or (iii), and I would exclude (ii) only if there is some objective, coherent difference between them.
I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then I compare that with my expectations based on my judgments. If there is a difference, then I am only thinking that I am interacting, instead of actually interacting.
Over time, I stop making judgments. In essence, I stop thinking about interacting with the world; I just interact and see what happens.
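(If it helps, here is a very loose Python analogy for what I mean, entirely my own construction; the “judgments”, the canned expectation, and the toy world are all hypothetical stand-ins, not a real procedure:)

```python
import random

# Loose analogy only: "judgments" generate expectations; observing
# "without judgment" means taking whatever the world does at face value.
# A mismatch means I was thinking I was interacting (living inside my
# expectations) instead of interacting. Every name here is a stand-in.

judgments = ["people answer politely", "replies follow my script", "nothing surprises me"]

def observe():
    # stand-in for raw, unjudged observation: the world just does something
    return random.choice(["what I expected", "a surprise"])

for _ in range(20):
    expectation = "what I expected" if judgments else None  # judgments -> expectations
    observation = observe()
    if expectation is not None and observation != expectation:
        judgments.pop()  # drop a judgment; over time, stop making them
print("judgments left:", judgments)
```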
The fewer judgments I make, the more difficult the Turing Test becomes, as it is no longer about meeting my expectations but about satisfying my desired level of complexity.
This, by the nature of real-world interaction, is a complicated set of interacting chaotic equations; and each time I remove a judgment from my repertoire, the equation gains a level of complexity, gains another strange attractor to interact with.
At a certain point of complexity, the equation becomes impossible to satisfy except by a “god”.
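(For “strange attractor” I have the textbook example in mind: the Lorenz system. The sketch below is mine, just to show the property I’m leaning on: two trajectories that start a millionth apart end up nowhere near each other, so no fixed set of expectations keeps tracking the system.)

```python
# The Lorenz system, the textbook strange attractor:
#   dx/dt = s*(y - x),  dy/dt = x*(r - z) - y,  dz/dt = x*y - b*z
# with the classic chaotic parameters s=10, r=28, b=8/3. The demo below
# (simple Euler steps, my construction) shows sensitive dependence:
# a starting difference of one part in a million grows to the size of
# the attractor itself.

def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

traj_a = (1.0, 1.0, 1.0)
traj_b = (1.0, 1.0, 1.000001)  # differs by one part in a million
for n in range(5001):
    if n % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(traj_a, traj_b)) ** 0.5
        print(f"t = {n * 0.01:5.1f}   separation = {gap:.6f}")
    traj_a, traj_b = lorenz_step(traj_a), lorenz_step(traj_b)
```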
Now, if an AI passes THAT Turing Test, I will consider it a real person.
I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then I compare that with my expectations based on my judgments. If there is a difference, then I am only thinking that I am interacting, instead of actually interacting.
Over time, I stop making judgments. In essence, I stop thinking about interacting with the world; I just interact and see what happens.
I think it’d be useful to hear an example of “observing reality without making judgments” and “observing reality while making judgments”. I’m having trouble figuring out what you believe the difference to be.
How can you tell the difference?