If computation isn’t the real thing, only the movement is, then a simulation (which is the complete representation of a thing using a different movement that can nevertheless be seen as performing the same computation) is not the thing itself, and you have no reason to believe that the phenomenon of consciousness can be internally experienced in a computer simulation, or that an algorithm can feel anything from the inside. After all, the “inside” and the “outside” are themselves just labels we use.
…and if they’d simply notice that the question is not about reality, but about their categorization of arbitrarily-chosen chunks of reality, they’d stop being confused and arguing nonsense.
The question of qualia and subjective experience isn’t a mere “confusion”.
If computation isn’t the real thing, only the movement is, then a simulation (which is the complete representation of a thing using a different movement that can nevertheless be seen as performing the same computation) is not the thing itself
You keep using that word “is”, but I don’t think it means what you think it means. ;-)
Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this “is”?
That is, what different predictions will you make, based on “is” or “is not” in your statement?
Consider that one carefully, before you continue.
The question of qualia and subjective experience isn’t a mere “confusion”.
Really? Would you care to explain what differences you predict to see in the world, as a result of the existence or non-existence of these concepts?
I don’t see that we have need of such convoluted hypotheses, when the simpler explanation is merely that our neural architecture more closely resembles Eliezer’s Network B, than Network A… which is a very modest hypothesis indeed, since Network B has many evolutionary advantages compared to Network A.
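(For concreteness, here’s a rough sketch of the two topologies from that article; the attribute list and node names are my paraphrase of Eliezer’s diagram, not a quotation:)

```python
# My paraphrase of the two networks in "How an Algorithm Feels From Inside".
# Network A wires the five observable attributes directly to one another;
# Network B routes them all through one central category node. That hub can
# stay active even after every attribute is settled, producing the feeling
# of a leftover question ("but is it *really* a blegg?").

ATTRIBUTES = ["blue", "egg-shaped", "furred", "glows", "vanadium-cored"]

# Network A: every attribute linked pairwise to every other attribute.
network_a_edges = {(a, b) for a in ATTRIBUTES for b in ATTRIBUTES if a < b}

# Network B: each attribute linked only to the central "blegg?" node.
CENTRAL = "blegg?"
network_b_edges = {(a, CENTRAL) for a in ATTRIBUTES}

print(f"Network A: {len(network_a_edges)} pairwise edges, no extra node")
print(f"Network B: {len(network_b_edges)} spoke edges plus a {CENTRAL!r} hub")
```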
Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this “is”?
.
Would you care to explain what differences you predict to see in the world, as a result of the existence or non-existence of these concepts?
Sure. Here are two simple ones:
If consciousness isn’t just computation, then I don’t expect to ever observe waking up as a simulation in a computer.
If consciousness isn’t just computation, then I don’t expect to ever see evolved or self-improved (not intentionally designed to be similar to humans) electronic entities discussing among themselves the nature of qualia and subjective inner experience.
Consider that one carefully, before you continue.
You’ve severely underestimated my rationality if all this time you thought I hadn’t even considered the question before I started participating in this thread.
Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this “is”?
.
That doesn’t look like a reply, there.
Sure. Here are two simple ones:
If consciousness isn’t just computation, then I don’t expect to ever observe waking up as a simulation in a computer.
If consciousness isn’t just computation, then I don’t expect to ever see evolved or self-improved (not intentionally designed to be similar to humans) electronic entities discussing among themselves the nature of qualia and subjective inner experience.
And if consciousness “is” just computation, what would be different? Do you have any particular reason to think you would observe any of those things?
You’ve severely underestimated my rationality if all this time you thought I hadn’t even considered the question before I started participating in this thread.
You missed the point of that comment entirely, as can be seen by you moving the quotation away from its referent. The question to consider was what the meaning of “is” was in the other statement you made. (It actually makes a great deal of difference, and it’s that difference that undermines the rest of your argument.)
Since the reply was just below both of your quotes: no, the single dot wasn’t a reply, it was an attempt to distinguish the two quotes.
I have to estimate the probability that you’re purposefully trying to make me look as if I intentionally avoided answering your question, while knowing that I didn’t.
As with your earlier “funny” response about how I supposedly favoured euthanizing paraplegics, you don’t give me the impression of responding in good faith.
Do you have any particular reason to think you would observe any of those things?
Of course. If consciousness is computation, then I expect that if my mind’s computation is simulated in a Turing machine, half the time my next experience will be of me inside the machine. By repeating the experiment enough times without that ever happening, I’d accumulate enough evidence that I’d no longer expect my subjective experience to ever find itself inside an electronic computation (see the numerical sketch below).
And if evolution stumbled upon consciousness by accident, and it’s solely dependent on some computational internal-to-the-algorithm component, then an evolution of mere algorithms in a Turing machine should also eventually be expected to stumble upon consciousness, and to produce similar discussions about consciousness once it reaches the point of simulating minds of sufficient complexity.
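To put rough numbers on the first test, here’s a minimal sketch of the Bayes update involved; the 50/50 prior and the per-trial likelihoods are my illustrative assumptions, nothing more:

```python
# Bayes update for the upload experiment described above, under two
# illustrative assumptions (mine): a 50/50 prior on H = "consciousness is
# just computation", and a per-trial chance of 1/2 that my next experience
# is inside the machine if H holds, versus 0 if it doesn't.

def posterior_h(prior_h: float, n_trials: int) -> float:
    """P(H | never once woke up inside, after n_trials simulations)."""
    likelihood_h = 0.5 ** n_trials      # P(always woke up outside | H)
    likelihood_not_h = 1.0              # P(always woke up outside | not H)
    evidence = prior_h * likelihood_h + (1 - prior_h) * likelihood_not_h
    return prior_h * likelihood_h / evidence

for n in (1, 5, 10, 20):
    print(f"after {n:2d} null trials: P(H) = {posterior_h(0.5, n):.7f}")

# Each null trial halves the odds on H; twenty repetitions push P(H)
# below one in a million, which is the accumulated evidence meant above.
```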
The question to consider was what the meaning of “is” was in the other statement you made.
Can you ask a complete question? What exactly are you asking? The statement you quoted had more than one “is” in it: four or five of them.
I think we’re done here. As far as I can tell, you’re far more interested in how you appear to other people than actually understanding anything, or at any rate questioning anything. I didn’t ask you questions to get information from you, I asked you questions to help you dissolve your confusion.
In any event, you haven’t grokked the “usage of words” sequence sufficiently to have a meaningful discussion on this topic. So, I’m going to stop trying now.
You didn’t expect me to have actual answers to your questions, and you take my having answers to indicate a problem with my side of the discussion, instead of updating your probabilities towards the possibility that it wasn’t me who was confused, but perhaps you.
I certainly am interested in understanding things, and questioning things. That’s why I asked questions to you, which you still haven’t answered:
what do you mean when you say that physics is a machine? (How would the world be different if physics wasn’t a machine?)
what do you mean when you call “computation” a meaningless concept outside its practical utility? (What concept is there that is meaningful outside its practical utility?)
Since I answered your questions, I think you should do me the reciprocal courtesy of answering these two.
For a thorough answer to your first question, study the sequences—especially the parts debunking the supernatural, explaining the “merely real”, and the basics of quantum mechanics.
For the second, I mean only that asking whether something “is” a computation or not is a pointless question… as described in “How an Algorithm Feels From The Inside”.
Thanks for the suggestion, but I’ve read them all. It seems to me you are perhaps talking about reductionism, which admittedly is a related issue, but even reductionists don’t need to believe that the simulation of a thing equals the thing simulated.
I do wonder if you’ve read http://lesswrong.com/lw/qr/timeless_causality/ . If Eliezer himself is holding onto the concept of “computation” (and “anticipation” too), what makes you think that any of the other sequences he wrote dissolves that term?
Thanks for the suggestion, but I’ve read them all.
Well, that won’t do any good unless you also apply them to the topic at hand.
even reductionists don’t need to believe that the simulation of a thing equals the thing simulated.
That depends entirely on what you mean by the words… which you haven’t actually defined, as far as I can tell.
You also seem to think I’m arguing some particular position about consciousness or the simulability thereof, but that isn’t actually so. I am only attempting to dispel confusion, and that’s a very different thing.
I’ve been saying only that someone who claims that there is some mysterious thing that prevents consciousness from being simulated is going to have to produce a coherent reduction of both “simulate” and “consciousness” in order to say something that isn’t nonsensical, because both of those notions are tied too strongly to inbuilt biases and intuitions.
That is, anything you try to say about this subject without a proper reduction is almost bound to be confused rubbish, sprinkled with repeated instances of the mind projection fallacy.
If Eliezer himself is holding onto the concept of “computation”
I rather doubt it, since that article says:
Such causal links could be required for “computation” and “consciousness”—whatever those are.
AFAICT, the article is silent on these points, having nothing in particular to say about such vague concepts… in much the same way that Eliezer leaves open the future definition of a “non-person predicate”.
Of course. If consciousness is computation, then I expect that if my mind’s computation is simulated in a Turing machine, half the time my next experience will be of me inside the machine.
Some of Chalmers’ ideas concerning ‘Fading and dancing qualia’ may be relevant here.

With a little ingenuity, and as long as we’re prepared to tolerate ridiculously impractical thought experiments, we could think up a scenario where more and more of your brain’s computational activity is delegated to a computer until the computer is doing all of the work. It doesn’t seem plausible that this would somehow cause your conscious experience to progressively fade away without you noticing.
Then we could imagine repeatedly switching the input/output connections of the simulated brain between your actual body and an ‘avatar’ in a simulated world. It doesn’t seem plausible that this would cause your conscious experience to keep switching on and off without you noticing.
The linked essay is a bit long for me to read right now, but I promise to do so over the weekend.
As to your particular example, the problem is that I can also think up an even more ridiculously impractical thought experiment: one in which more and more of that computer’s computational activity is in turn delegated to a group of abacus-using monks—and then it doesn’t seem plausible for my conscious experience to keep on persisting, when the monks end up being the ones doing all the work...
It’s the bullet I’m not yet prepared to bite—but if I do end up doing so, despite all my intuition telling me no, that’ll be the point where I’ll also have to believe Tegmark IV. P(Tegmark IV | consciousness can persist in the manipulations of abaci) ≈ 99% for me...