Is there something that it is like to be Siri? Still, Siri is a tool and potentially a powerful one. But I feel no need to be afraid of Siri as Siri any more than I am afraid of nuclear weapons in themselves. What frightens me is how people might misuse them. Not the tools themselves. Focusing on the tools, then, does not address the root issue, which is human nature and what social structures we have in place to make sure some clown doesn’t build a nuke in his basement.
Did ELIZA present the “dangers and promises” of AI? Weizenbaum’s secretary thought so. She thought it passed the Turing test. Did it? Will future AI tools really be indistinguishable from living beings? I doubt it. I think it will always be apparent to people that they are dealing with a software tool that makes it easier for them to do something.
If behaviorism has been rejected as an explanation for consciousness how can one appeal to behaviorism as a model for future AI?
--
“so what evidence for this claimed proportion is there?”
Oh, I was just being flippant. It is a law of the universe that if there is a joke to be made I must at least try for it. ;)
“I don’t see how this is a corollary. ”
Yeah, also not serious. I meant only to mock the eternal claim of fusion proponents that it is always “just around the corner”. I remember as a child reading breathless articles in Popular Science in the ’70s about the imminent breakthroughs in nuclear fusion “any day now”. Just like the AI researchers of that day. And 40 years later little has changed.
I do not mistake Google translate for a conscious entity. Neither does anyone else. I can see no reason to believe that will change in the next 40 years.
“Examples include tabletop designs that can be made by hobbyists.”
Well now, that was cool. But yeah, no net increase in energy. Still, good for him.
I’m not sure what you mean by this question. Is this a variant of what it is like to be a bat? There’s a decent argument that such questions don’t make sense. But this doesn’t matter much: whether some AI has qualia or not doesn’t change any of the external behavior, so for most purposes, like existential risk, it doesn’t matter.
“I doubt it. I think it will always be apparent to people that they are dealing with a software tool that makes it easier for them to do something.”
This and most of the rest of your post are assertions, not arguments.
“If behaviorism has been rejected as an explanation for consciousness how can one appeal to behaviorism as a model for future AI?”
First, what do you mean by behaviorism in this context? Behaviorism as that word is classically defined isn’t an attempt to explain consciousness. It doesn’t care about consciousness at all.
“Is this a variant of what it is like to be a bat?”
Is there something that it is like to be you? There are also decent arguments that qualia does matter. It is hardly a settled matter. If anything, the philosophical consensus is that qualia is important.
“Whether some AI has qualia or not doesn’t change any of the external behavior,”
Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful house maid in robot form. I can get that functionality right now.
If I write a program that allows my PC to speak in perfect English and in a perfectly human voice, can my computer talk to me? Can it say hello? Yes, it can. Can it greet me? No, it cannot, because it cannot intend to say hello.
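The distinction can be made concrete with a toy sketch (purely illustrative; the cue names and canned replies are invented for the example, and any actual speech synthesis layer is left out):

```python
# Illustrative toy only: a program that can produce the word "hello"
# on cue, with no state anywhere that could count as an intention.
CANNED_REPLIES = {
    "greet": "hello",
    "farewell": "goodbye",
}

def speak(cue: str) -> str:
    # Output is a fixed mapping from input cue to canned text;
    # "saying hello" here is just string retrieval.
    return CANNED_REPLIES.get(cue, "...")

print(speak("greet"))  # emits "hello", yet nothing greeted anyone
```

The program's behavior is indistinguishable from a greeting, which is exactly the gap being pointed at: the output alone carries no intent.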
“Behaviorism as that word is classically defined isn’t an attempt to explain consciousness.”
Wikipedia? Really? Did you even bother to read the page, or are you just pointing to something on Wikipedia and believing that constitutes an argument? Look at section 5, “Behaviorism in philosophy”. Read that and follow the link to the Philosophy of Mind article. Read that. You will discover that behaviorism was at one time thought to be a valid theory of mind: that all we needed to do to explain the mind was to describe human behavior.
“If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella.” Is this a valid deduction? No, it isn’t because consciousness is not behavior only.
If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.
I’m not sure this question is any better formed. “What it is like to be an X” doesn’t seem to have any coherent meaning when one presses people about what they actually are talking about.
“If anything, the philosophical consensus is that qualia is important.”
Taking qualia seriously as a question is a distinct claim from qualia actually having anything substantial to do with consciousness. I’m not sure of specific acceptance levels of qualia, but the fact is that a majority of philosophers either accept physicalism or lean towards it. So I’m not sure how to reconcile that with your claim.
“Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful house maid in robot form. I can get that functionality right now.”
On the contrary, most people don’t care whether it is conscious in some deep philosophical sense. In fact, functional AIs that are completely non-conscious have certain advantages, such as being less of an ethical problem when sent off to be destroyed (say, as robot soldiers, or as probes to other planets). Moreover, the primary worry discussed on LW as far as AI is concerned is that an AI will bootstrap itself in a way that results in a very unpleasant singularity. Whether the AI is truly conscious or not has nothing to do with that worry.
“Wikipedia? Really?”
Yes, for many purposes Wikipedia is quite useful and reasonably reliable as a source. In many fields (math and chemistry for example) articles have been written by actual experts in the fields.
“Did you even bother to read the page, or are you just pointing to something on Wikipedia and believing that constitutes an argument?”
My primary intent for the link was its introduction, which uses the fairly standard notion that “psychology should concern itself with the observable behavior of people and animals, not with unobservable events that take place in their minds.” It is incidentally useful to understand that behaviorism, in most senses of the term, went away not because of arguments about things like qualia, but because advances in neuroscience and related areas allowed us to get much more direct access to what was going on inside. At some level, psychology is still controlled by behaviorism if one interprets that to include brain activity as behavior.
And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn’t an attempt to explain consciousness. It is essentially an argument that psychology doesn’t need to explain consciousness. These aren’t the same thing.
“If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella.” Is this a valid deduction? No, it isn’t because consciousness is not behavior only.
So I don’t follow you at all here, and it doesn’t even look like there’s any argument you’ve made here other than just some sort of conclusion. But I don’t see where in the notion of “deduction” consciousness comes in. Are you using some non-standard definition of “use” or of “umbrella”?
“If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.”
“Don’t be a blockhead. ;)”
So, on LW there’s a general expectation of civility, and I suspect that that general expectation doesn’t go away when one punctuates with a winky-emoticon.
“On the contrary, most people don’t care whether it is conscious in some deep philosophical sense.”
Do you mean that people don’t care if they are philosophical zombies or not? I think they care very much. I also think that you’re eliding the point a bit by using “deep” as a way to hand-wave the problem away. The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And, and this is important, it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program. Resources that could have been better spent in more productive ways.
That’s why I think this is so important. You have to get things right, get your basic “vector” right, otherwise you’ll get lost. The problem is so large that once you make a mistake about what it is you are doing, you’re done for. The “brain stabbers” are in my opinion headed in the right direction. The “let’s throw more parallel processors connected in novel topologies at it” crowd are not.
“Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant bad singularity.”
Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?
“And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn’t an attempt to explain consciousness.”
Yes it is. In every lecture I have heard in which the history of the philosophy of mind is recounted, the behaviorism of the ’50s and early ’60s and the main arguments for and against it as an explanation of consciousness are covered. This is just part of the standard literature. I know that cognitive/behavioral therapeutic models are in wide use and very successful, but that is simply beside the point here.
“So I don’t follow you at all here, and it doesn’t even look like there’s any argument you’ve made here other than just some sort of conclusion.”
Are you kidding!??? It was nothing BUT argument. Here, let me make it more explicit.
Premise 1 “If it is raining, Mr. Smith will use his umbrella.”
Premise 2 “It is raining”
Conclusion “therefore Mr. Smith will use his umbrella.”
That is a behaviorist explanation for consciousness. It is logically valid but still fails, because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior, and if you cannot deduce intent from behavior, then behavior cannot constitute intentionality.
“So, on LW there’s a general expectation of civility, and I suspect that that general expectation doesn’t go away when one punctuates with a winky-emoticon.”
It’s a joke, hon. I thought you would get the reference to Ned Block’s counterargument to behaviorism, which shows how an unconscious machine could pass the Turing test. I’m pretty sure Steven Moffat was aware of it when he created the Teselecta.
Suppose we build a robot, and instead of a robot brain we put in a radio receiver. The robot can look and move just like any human. Suppose then that we take the nation of China and give everyone a transceiver and a rule they must follow: for each individual, if they receive input state S1, they then output state S2. They are all connected in a functional flowchart that perfectly replicates a human brain. The robot then looks, moves, and above all talks just like any human being. It passes the Turing test.
Is “Blockhead” (the name affectionately given to this robot) conscious?
No, it is not. A non-intelligent machine passes the behaviorist Turing test for an intelligent AI. Therefore behaviorism cannot explain consciousness, and an intelligent AI could never be constructed from a database of behaviors. (Which is essentially what all attempts at computer AI consist of: a database and a set of rules for accessing it.)
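That “database plus a rule for accessing it” picture can be sketched in a few lines (a deliberately crude caricature for illustration, not Block’s actual construction; the table entries are invented):

```python
# Blockhead as pure lookup: every input state S1 maps to an output state S2.
# Scaled up enough, such a table could in principle carry on a passable
# conversation, yet the mechanism is nothing but retrieval.
BLOCKHEAD_TABLE = {
    "Hello.": "Hello! Nice weather today, isn't it?",
    "Are you conscious?": "Of course I am. Aren't you?",
}

def blockhead(input_state: str) -> str:
    # The only "rule for accessing the database": look the input up.
    return BLOCKHEAD_TABLE.get(input_state, "Could you rephrase that?")

print(blockhead("Are you conscious?"))
```

The point of the thought experiment survives the caricature: nothing about the mechanism changes no matter how convincing the table's outputs become.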
“On the contrary, most people don’t care whether it is conscious in some deep philosophical sense.”
“Do you mean that people don’t care if they are philosophical zombies or not?”
If you look above, you’ll note that the statement you’ve quoted was in response to your claim that “people want is a living conscious artificial mind” and my sentence after the one you are quoting is also about AI. So if it helps, replace “it” with “functional general AI” and reread the above. (Although frankly, I’m confused by how you interpreted the question given that the rest of your paragraph deals with AI.)
But I think it is actually worth touching on your question: do people care if they are philosophical zombies? I suspect that by and large the answer is “no”. While many people care about whether they have free will in any meaningful sense, the question of qualia simply isn’t something that’s widely discussed at all. Moreover, whether a given individual thinks that they have qualia in any useful sense almost certainly doesn’t impact how they think they should be treated.
“The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And, and this is important, it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program. Resources that could have been better spent in more productive ways.”
If a problem is large, exploring false leads is going to be inevitable. This is true even for small problems. Moreover, I’m not sure what you mean by “strong AI proponents” in this context. Very few people actively work towards research directly aimed at building strong AI, and the research that does go in that direction often turns out to be useful in weaker cases like machine learning. That’s how for example we now have practical systems with neural nets that are quite helpful.
“Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?”
So insisting that thinking has to occur in a specific substrate is not magical thinking, but self-improvement is? Bootstrapping doesn’t involve physical processes arising out of nothing. The essential idea in most variants is self-modification producing a more and more powerful AI. There are precedents for this sort of thing. Human civilization, for example, has essentially modified itself, albeit at a slow rate, over time.
“And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn’t an attempt to explain consciousness.”
“Yes it is. In every lecture I have heard in which the history of the philosophy of mind is recounted, the behaviorism of the ’50s and early ’60s and the main arguments for and against it as an explanation of consciousness are covered.”
I suspect this is a definitional issue. What do you think behaviorism says that is an attempt to explain consciousness, rather than just an argument that it doesn’t need an explanation?
Premise 1 “If it is raining, Mr. Smith will use his umbrella.” Premise 2 “It is raining” Conclusion “therefore Mr. Smith will use his umbrella.”
“That is a behaviorist explanation for consciousness. It is logically valid but still fails, because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior, and if you cannot deduce intent from behavior, then behavior cannot constitute intentionality.”
Ok. I think I’m beginning to see the problem to some extent, and I wonder how much of this is due to trying to talk about behaviorism in a non-behaviorist framework. The behaviorist isn’t making any claim about “intent” at all. Behaviorism just tries to talk about behavior. Similarly, “decides” isn’t a statement that goes into their model. Moreover, the fact that some days Smith does one thing in response to rain and some days does other things isn’t a criticism of behaviorism: in order to argue that it is, one needs to claim that some sort of free-willed decision is going on, rather than subtle differences in the day or recent experiences. The objection then isn’t to behaviorism, but rather an assertion of a strong notion of free will.
“I thought you would get the reference to Ned Block’s counterargument to behaviorism, which shows how an unconscious machine could pass the Turing test.”
It may help to be aware of the illusion of transparency. Oblique references are one of the easiest things to miscommunicate about. But yes, I’m familiar with Block’s look-up table argument. It isn’t clear how it is relevant here: yes, the argument raises issues with many purely descriptive notions of consciousness, especially functionalism. But it isn’t an argument that consciousness needs to involve free will and qualia and who knows what else. If anything, it is a decent argument that the whole notion of consciousness is fatally confused.
“Is ‘Blockhead’ (the name affectionately given to this robot) conscious? No, it is not.”
So everything here is essentially just smuggling in the conclusion you want, in other words. It might help to ask whether you can give a definition of consciousness.
“I’m pretty sure Steven Moffat was aware of it when he created the Teselecta.”
Massive illusion of transparency here: you’re presuming that Moffat is thinking about the same things that you are. The idea of miniature people running a person has been around for a long time. Prior examples include a series of Sunday strips of Calvin and Hobbes, as well as a truly awful Eddie Murphy movie.