I’ve read the sequences and have a pretty solid grip on what the LW orthodox position is on epistemology and a number of other issues—anyone need some clarification on any points?
Could you summarise the point and conclusions of the posts about second-order logic and Gödel’s theorems in the Epistemology Sequence? I didn’t understand them, but I’d like to know at least where they were heading.
I don’t quite have the mathematical background and sophistication to grok those posts either, but I did get their purpose: to hook mathematicians into thinking about the open problems that Eliezer and MIRI have identified as relevant.
I’m guessing you think free will is a trivial problem; what about consciousness? That still baffles me.
The most apt description I’ve found is something along the lines of “consciousness is what information-processing feels like from the inside.”
It’s not just about what a brain does, because a simulated brain would still be conscious despite not being made of neurons. It’s about certain kinds of patterns of thought (not the physical neural action, but thought in the sense of operations performed on data). Human brains have it, insects don’t, and anything in between is something for actual specialists to discuss. But what it is (a pattern of data processing) isn’t all that mysterious.
Okay, but why does information processing feel like anything at all? There are cognitive processes that are information processing, yet you are not conscious of them.
How do you know?
I find it awfully suspicious that the vast majority of humans talk about experiencing consciousness. It’d be very strange if they were doing so for no reason, so I think that the human brain has some kind of pattern of thought that causes talking about consciousness.
For brevity, I call that-kind-of-thinking-that-causes-people-to-talk-about-consciousness “consciousness”.
Defining it as “it has it if it talks about it” is problematic. You can make a very simple machine that talks about experiencing consciousness.
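To make that concrete, here is a minimal sketch (Python, purely illustrative) of such a “machine”:

    # A trivial program that emits the words with no inner life behind them.
    print("I experience consciousness, just like you do.")

It produces the same verbal report a person would, which is exactly why “talks about it” alone can’t be the criterion.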
And that simple machine does so because it was made to do so by people experiencing consciousness.
How do you know?
I find it awfully suspicious that the vast majority of humans talk about experiencing consciousness. It’d be very strange if they were doing so for no reason.
And if interaction with such machines is the only ground you have for thinking that anything experiences consciousness, I think it would be reasonable to say that “consciousness” is whatever it is that makes those machines talk that way.
In practice, much of our notion of “consciousness” comes from observing our own mental workings, and I think we each have pretty good evidence that other people function quite similarly to ourselves, all of which makes that scenario unlikely to be the one we’re actually in.
How does anyone learn what the term “consciousness” applies to? So far as I can tell, it’s universally by observing human beings (who are, so far as anyone can tell, implemented almost entirely in human brains) and most specifically themselves. So it seems that if “consciousness” refers to anything at all, it refers to something human brains—or at least human beings—have. (I would say the same thing about “intelligence” and “humanity” and “personhood”.)
I suppose it’s just barely possible that, e.g., someone might find good evidence that many human beings are actually some kind of puppets controlled from outside the Matrix. In that case we might want to say that some human brains have consciousness but not all. This seems improbable enough—it seems on a par with discovering that we’re in a simulation where the electrical conductivity of copper emerges naturally from the underlying laws, while the electrical conductivity of iron is hacked in case by case by experimenters who are deliberately misleading us about what the laws are—that I feel perfectly comfortable ignoring the possibility until some actual evidence comes along.
I know I’m conscious because I experience it. As for everyone else, really I’m generalizing from one example.
So do I, but it doesn’t help me to assess the consciousness of others.
Occam’s Razor. All these people seem similar to me in so many ways, they’re probably similar in this way too, especially if they all say that they are.
The little box that claims it experiences consciousness (just like you do) is also similar to you. How do you decide what is similar enough and what is not?
We live in a world effectively devoid of borderline cases. Humans are clearly close enough, since they all act like they’re thinking in basically similar fashions, and other species are clearly not. I will have to reconsider this when we encounter non-human intelligences, but for now I have zero data on those, and thus cannot form a meaningful opinion.
I suggest you taboo the word “clearly”. For example, it is not at all clear to me that a six-month-old infant experiences consciousness as I do. But if the infant does, then surely an adult chimpanzee does too?
See where this is going?
Well, it is possible to make an argument based on the Self-Sampling Assumption that only people who share the rare inherent trait X with me are conscious.
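Sketched loosely, with X a hypothetical trait and f its assumed population frequency: under the SSA I reason as if I were a random sample from the conscious beings, so

    P(I have X | only X-havers are conscious) = 1   (the reference class is then just the X-havers)
    P(I have X | everyone is conscious)       = f

The likelihood ratio 1/f then favours “only X-havers are conscious”, and the rarer X is, the stronger the pull.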
Is it a sort of trait the talking box can’t possibly have?