“On the contrary, most people don’t care whether it is conscious in some deep philosophical sense.”
Do you mean that people don’t care if they are philosophical zombies or not? I think they care very much. I also think that you’re eliding the point a bit by using “deep” as a way to hand-wave the problem away. The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And, and this is important, it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program, resources that could have been better spent in more productive ways.
That’s why I think this is so important. You have to get things right, get your basic “vector” right, otherwise you’ll get lost. The problem is so large that once you make a mistake about what it is you are doing, you’re done for. The “brain stabbers” are, in my opinion, headed in the right direction. The “let’s throw more parallel processors connected in novel topologies at it” crowd are not.
“Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant bad singularity.”
Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?
“And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn’t an attempt to explain consciousness.”
Yes it is. In every lecture I have heard recounting the history of the philosophy of mind, the behaviorism of the ’50s and early ’60s is covered, and the main arguments for and against it as an explanation of consciousness are given. This is just part of the standard literature. I know that cognitive/behavioral therapeutic models are in wide use and very successful, but that is simply beside the point here.
“So I don’t follow you at all here, and it doesn’t even look like there’s any argument you’ve made here other than just some sort of conclusion.”
Are you kidding?! It was nothing BUT argument. Here, let me make it more explicit.
Premise 1: “If it is raining, Mr. Smith will use his umbrella.”
Premise 2: “It is raining.”
Conclusion: “Therefore, Mr. Smith will use his umbrella.”
That is a behaviorist explanation for consciousness. It is logically valid but still fails, because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior, and if you cannot deduce intent from behavior, then behavior cannot constitute intentionality.
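The dispositional reading of the syllogism above can be sketched in a few lines of code; the function name and strings are my own hypothetical illustration, not anyone’s actual model:

```python
# Premise 1 as a behavioral disposition: response is a function of stimulus alone.
# Names and strings here are hypothetical, purely for illustration.
def predicted_behavior(raining: bool) -> str:
    """Behaviorist prediction: behavior depends only on the stimulus."""
    return "uses umbrella" if raining else "leaves umbrella at home"

# Premise 2: it is raining. The conclusion then follows mechanically.
print(predicted_behavior(True))  # prints "uses umbrella"
```

The objection in the paragraph above is precisely that no such stimulus-to-response function has a term for Smith deciding he likes getting wet that day.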
“So, on LW there’s a general expectation of civility, and I suspect that that general expectation doesn’t go away when one punctuates with a winky-emoticon.”
It’s a joke, hon. I thought you would get the reference to Ned Block’s counterargument to behaviorism, which shows how an unconscious machine could pass the Turing test. I’m pretty sure that Steven Moffat must have been aware of it when he created the Teselecta.
Suppose we build a robot, and instead of a robot brain we put in a radio receiver. The robot can look and move just like any human. Suppose then that we take the nation of China and give everyone a transceiver and a rule they must follow: if an individual receives as input state S1, they will then output state S2. They are all connected in a functional flowchart that perfectly replicates a human brain. The robot then looks, moves, and above all talks just like any human being. It passes the Turing test.
Is “Blockhead” (the name affectionately given to this robot) conscious?
No, it is not. A non-intelligent machine passes the behaviorist Turing test for an intelligent AI. Therefore behaviorism cannot explain consciousness, and an intelligent AI could never be constructed from a database of behaviors. (Which is essentially what all attempts at computer AI consist of: a database and a set of rules for accessing it.)
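The “database of behaviors” picture can be sketched as a lookup table over (state, input) pairs, roughly the structure of Block’s machine; every state name and utterance here is a hypothetical illustration:

```python
# A toy "Blockhead": its behavior is nothing but table lookup.
# States and utterances are hypothetical, purely for illustration.
TABLE = {
    ("S1", "Hello"): ("S2", "Hi there!"),
    ("S2", "How are you?"): ("S1", "Fine, thanks. And you?"),
}

def step(state: str, heard: str) -> tuple[str, str]:
    """Return (next_state, reply) by pure lookup; no understanding anywhere."""
    return TABLE.get((state, heard), (state, "Sorry, could you rephrase?"))

state, reply = step("S1", "Hello")
print(reply)  # prints "Hi there!"
```

Given a large enough table, such a system produces fluent conversational behavior while the mechanism is transparently just retrieval, which is the point of the thought experiment.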
“On the contrary, most people don’t care whether it is conscious in some deep philosophical sense.”
Do you mean that people don’t care if they are philosophical zombies or not?
If you look above, you’ll note that the statement you’ve quoted was in response to your claim that “people want is a living conscious artificial mind” and my sentence after the one you are quoting is also about AI. So if it helps, replace “it” with “functional general AI” and reread the above. (Although frankly, I’m confused by how you interpreted the question given that the rest of your paragraph deals with AI.)
But I think it is actually worth touching on your question: Do people care if they are philosophical zombies? I suspect that by and large the answer is “no”. While many people care about whether they have free will in any meaningful sense, the question of qualia simply isn’t something that’s widely discussed at all. Moreover, whether a given individual thinks that they have qualia in any useful sense almost certainly doesn’t impact how they think they should be treated.
The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And… and this is important. it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing a ill advised research program. Resources that could have better been spent in more productive ways.
If a problem is large, exploring false leads is going to be inevitable. This is true even for small problems. Moreover, I’m not sure what you mean by “strong AI proponents” in this context. Very few people actively work on research directly aimed at building strong AI, and the research that does go in that direction often turns out to be useful in weaker cases like machine learning. That’s how, for example, we now have practical systems with neural nets that are quite helpful.
Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?
So insisting that thinking has to occur in a specific substrate is not magical thinking, but self-improvement is? Bootstrapping doesn’t involve physical processes arising out of nothing. The essential idea in most variants is self-modification producing a more and more powerful AI. There are precedents for this sort of thing. Human civilization, for example, has essentially self-modified, albeit at a slow rate, over time.
“And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn’t an attempt to explain consciousness.”
Yes it is. In every lecture I have heard recounting the history of the philosophy of mind, the behaviorism of the ’50s and early ’60s is covered, and the main arguments for and against it as an explanation of consciousness are given.
I suspect this is a definitional issue. What do you think behaviorism says that is an attempt to explain consciousness, rather than just arguing that it doesn’t need an explanation?
Premise 1: “If it is raining, Mr. Smith will use his umbrella.” Premise 2: “It is raining.” Conclusion: “Therefore, Mr. Smith will use his umbrella.”
That is a behaviorist explanation for consciousness. It is logically valid but still fails, because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior, and if you cannot deduce intent from behavior, then behavior cannot constitute intentionality.
Ok. I think I’m beginning to see the problem to some extent, and I wonder how much of this is due to trying to talk about behaviorism in a non-behaviorist framework. The behaviorist isn’t making any claim about “intent” at all. Behaviorism just tries to talk about behavior. Similarly, “decides” isn’t a statement that goes into their model. Moreover, the fact that some days Smith does one thing in response to rain and sometimes does other things isn’t a criticism of behaviorism: in order to argue that it is, one needs to claim that some sort of free-willed decision is going on, rather than subtle differences in the day or recent experiences. The objection then isn’t to behaviorism, but rather to asserting a strong notion of free will.
I thought you would get the reference to Ned Block’s counter argument to behaviorism. It shows how an unconscious machine could pass the Turing test
It may help to be aware of the illusion of transparency. Oblique references are one of the easiest things to miscommunicate about. But yes, I’m familiar with Block’s look-up table argument. It isn’t clear how it is relevant here: yes, the argument raises issues with many purely descriptive notions of consciousness, especially functionalism. But it isn’t an argument that consciousness needs to involve free will and qualia and who knows what else. If anything, it is a decent argument that the whole notion of consciousness is fatally confused.
Is “Blockhead” (the name affectionately given to this robot) conscious?
No it is not.
In other words, everything here is essentially just smuggling in the conclusion you want. It might help to ask if you can give a definition of consciousness.
I’m pretty sure that Steven Moffat must have been aware of it and created the Teselecta.
Massive illusion of transparency here: you’re presuming that Moffat is thinking about the same things that you are. The idea of miniature people running a person has been around for a long time. Prior examples include a series of Sunday strips of Calvin and Hobbes, as well as a truly awful Eddie Murphy movie.