Does that “human level intelligence module” have any ability to actually control the robot’s actions, or just to passively observe and then ask “why did I do that?” What’re the rules of the game, as such, here?
I don’t think it’s saying anything too shocking to admit this is all a metaphor for people; I’m going to be pushing the view that people’s thoughts and words are a byproduct of the processes that determine behavior rather than directly controlling them. I anticipate providing at least a partial answer to your question in about two weeks; if that doesn’t satisfy you, let me know and we can talk about it then.
One that presents consciousness as an epiphenomenon. In the version of the robot that has human intelligence, you describe it as bolted on, experiencing the robot’s actions but having no causal influence on them, an impotent spectator.
Are your projected postings going to justify this hypothesis?
I hope so. Let’s see.
My first thought was that this was pointing towards an epiphenomenal view of consciousness. But I think it’s actually something more radical and more testable. Yvain, check me if I get this wrong, but I think you’re saying that “our conscious verbal acts—both internally and externally directed—do not primarily cause our actions.”
Here is an experiment to test this: have people perform some verbal act repeatedly, and see if it shifts their actions. This happens to be a well-known motivational and behavior-alteration technique, beloved of football teams, political campaigns, governments, and religions, among other organizations. My impression is that it works to a point, but not consistently. Has anybody done a test of how catechisms, chants, and the like shape behavior?
I hope that he explicitly deals with this. By the way, I didn’t know the actual definition of epiphenomenon, which is “a secondary phenomenon that occurs alongside or in parallel to a primary phenomenon”.
I’m going to be pushing the view that people’s thoughts and words are a byproduct of the processes that determine behavior rather than directly controlling them.
[An old man asked:] “...one of my students asked me whether the enlightened man is subject to the law of causation. I answered him: ‘The enlightened man is not subject to the law of causation.’ For this answer evidencing a clinging to absoluteness I became a fox for five hundred rebirths, and I am still a fox. Will you save me from this condition with your Zen words and let me get out of a fox’s body? Now may I ask you: Is the enlightened man subject to the law of causation?”
Hyakujo said: “The enlightened man is one with the law of causation.”
I take this to be an elliptical way of suggesting that Yvain is offering a false dichotomy in suggesting a choice between the notion of thoughts being in control of the processes determining behavior and the notion of thoughts being a byproduct of those processes.
I agree. Thoughts are at one with (are a subset of) the processes that determine behavior.
I’m not so sure. Using the analogy of a computer program, we could think of thoughts either as like the lines of code in the program (in which case they’re at one with, or in control of, the processes generating behavior, depending on how you want to look at it) or you could think of thoughts as like the status messages that print “Reticulating splines” or “50% complete” to the screen, in which case they’re byproducts of those processes (very specific, unnatural byproducts, to boot).
My view is closer to the latter; they’re a way of allowing the brain to make inferences about its own behavior and to communicate those inferences. Opaque processes decide to go to Subway tonight because they’ve heard it’s low calorie, then they produce the verbal sentence “I should go to Subway tonight because it’s low calorie”, and then when your friend asks you why you went to Subway, you say “Because it’s low calorie”.
The tendency of thoughts to appear in a conversational phrasing (“I think I’ll go to Subway tonight”) rather than something like “Dear Broca’s Area—Please be informed that we are going to Subway tonight, and adjust your verbal behavior accordingly—yours sincerely, the prefrontal cortex” is a byproduct of their use in conversation, not their internal function.
Right now I’m just asserting that this is a possibility and that it’s distinct from thoughts being part of the decision-making structure. I’ll try to give some evidence for it later.
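The program analogy a few comments up can be turned into a toy sketch (the function, options, and messages here are all invented for illustration): the `min` call plays the role of the “lines of code” that actually produce the behavior, while the `print` calls play the role of the “Reticulating splines”-style status messages.

```python
# Toy sketch of the two readings of the analogy. The min() call is the
# causally effective decision logic; the print() calls are status
# messages that report on the decision without influencing it.

def choose_dinner(options):
    print("Considering dinner options...")            # byproduct, like "50% complete"
    best = min(options, key=lambda o: o["calories"])  # the causally effective step
    print(f"I should go to {best['name']} because it's low calorie.")
    return best["name"]

choice = choose_dinner([
    {"name": "Subway", "calories": 400},
    {"name": "Burger Barn", "calories": 900},
])
```

Deleting every `print` leaves `choice` unchanged, which is the sense in which the messages are byproducts rather than part of the decision-making structure.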
If you make the old mistake of confusing thoughts in general with analytic, reflective, verbal, serial internal monologue, I’m going to be sad.
Opaque processes decide to go to Subway tonight because they’ve heard it’s low calorie, then they produce the verbal sentence “I should go to Subway tonight because it’s low calorie”
I find this rather alien. Some processes are opaque, but that kind definitely isn’t. Something (hunger, time, memory of previously made plans, whatever) triggers a reusable pick-a-sandwich-shop process; names and logos of nearby stores come up; associated emotions and concepts come up; weights associated with each shift—an image of those annoying health freaks who diet all the time upvotes “tasty” and downvotes “low calorie”; eventually they stabilize, create an image of myself going to Subway rather than somewhere else, and hand it over to motor control. If something gets stuck at any point, the process stops, a little alarm rings, and internal monologue turns to it to make it come unstuck. If not, there are no verbal thoughts at any point.
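The process described in this comment can be sketched as a weighted competition among candidates, with a fallback to verbal deliberation only when no clear winner emerges (a rough sketch; the names, weights, and tie-breaking margin are all invented):

```python
# Candidates accumulate weight from associated emotions and concepts;
# if one wins by a clear margin, it goes straight to motor control,
# otherwise the "alarm rings" and internal monologue takes over.

def pick_shop(candidates, associations, margin=1.0):
    # Each association is a (concept, weight) pair that upvotes or
    # downvotes a candidate, as with the health-freak image above.
    scores = {name: sum(w for _, w in associations.get(name, []))
              for name in candidates}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0], "nonverbal"   # hand straight to motor control
    # Something got stuck: fall back to verbal deliberation.
    return ranked[0][0], "verbal deliberation"

shop, mode = pick_shop(
    ["Subway", "Deli"],
    {"Subway": [("tasty", 1.0), ("low calorie", -0.5)],
     "Deli": [("far away", -1.0)]},
)
```

On these made-up weights, Subway wins by a wide enough margin that no “verbal” stage ever runs, matching the claim that there need be no verbal thoughts at any point.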
Probably time to start being sad; I’m mostly going to use “thoughts” that way. But I think what I’m talking about holds for any definition of “thought” where it’s a mental activity accessible to the conscious mind.
I recognize that different people use internal monologue to different degrees, but whether you decide with a monologue or with vague images of concepts, I think the core idea remains true: these are attempts to turn subjective processes into objects for thought, usually so that you can weave a social narrative around them.
You may have missed a subtlety in my comment. In your grandparent, you said “people’s thoughts and words are a byproduct …”. In my comment, I suggested “Thoughts are at one with …”. I didn’t mention words.
If we are going to focus on words rather than thoughts, then I am more willing to accept your model. Spoken words are indeed behaviors—behaviors that purport to be accurate reports of thoughts, but probably are not.
Perhaps we should taboo “thought”, since we may not be intending the word to designate the same phenomenon.
What I meant was that if the intelligence part is utterly passive re the behavior, then I’m unsure how strong a metaphor it is for human behavior. Yes, we sometimes don’t know why we do things, or we have inaccurate models of ourselves when we explain why we do what we do. But the “intelligence part” having absolutely no effect on the actions?
But, perhaps all will be resolved in two weeks. :)