My previous attempt at asking this question failed in a manner that confuses me greatly, so I’m going to attempt to repair the question.
Suppose I’m taking a math test. I see that one of the questions is “Find the derivative of 1/cos(x^2).” I conclude that I should find the derivative of 1/cos(x^2). I then go on to actually do so. What is it that causes me (specifically, the proximate cause, not the ultimate) to go from concluding that I should do something to attempting to do it?
I think you are asking the question that is a major theme of Hofstadter’s book Gödel, Escher, Bach. To be more specific, he raises the question humorously on page 461 in the Birthday Cantatatatatat… to motivate Chapter XV: Jumping out of the System.
He returns to the question in Chapter XX, and on page 685 offers a quotable answer:
There is no need to rely on “rules that permit you to apply rules”, because the lowest level rules—those without any “meta”s in front—are embedded in the hardware, and they run without permission.
Another way to look at the problem is to ask: what kind of life experiences would give you the anchors in reality to dissolve the question? What works for me is understanding how computers work, from the gate level up to interpreters for high-level languages. How does (eval '(eval '(+ 2 2))) go from concluding it should evaluate (+ 2 2) to attempting to do it?
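To make that concrete, here is a minimal toy-evaluator sketch (my own illustration, not code from GEB or from any real Lisp; toy-eval and toy-lookup are invented names). The point it tries to show: when dispatch reaches (+ 2 2), there is no further step of deciding to add; the primitive just runs.

```scheme
;; Minimal toy evaluator (an illustration only; `toy-eval` and `toy-lookup`
;; are made-up names, not part of any real Lisp).  Note that there is no
;; clause for "now actually do it": once dispatch identifies an application,
;; the primitive procedure simply runs.
(define (toy-eval expr)
  (cond ((number? expr) expr)                        ; a number is its own value
        ((eq? (car expr) 'quote) (cadr expr))        ; (quote x) => x, unevaluated
        ((eq? (car expr) 'eval)                      ; (eval x): evaluate x, then its result
         (toy-eval (toy-eval (cadr expr))))
        (else (apply (toy-lookup (car expr))         ; an application: fetch the primitive
                     (map toy-eval (cdr expr))))))   ; and run it on the evaluated arguments

(define (toy-lookup op)
  (case op ((+) +) ((*) *)
    (else (error "unknown operator" op))))

(toy-eval '(eval '(+ 2 2)))   ; => 4
```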
I conclude that I should find the derivative of 1/cos(x^2). I then go on to actually do so.
In what sense can you be said to conclude this? When I took tests, my mind went straight from reading questions to trying to answer them without stopping to consciously conclude anything. At no point was my attention fixed on what I should do; it was fixed on doing.
What kind of answer do you expect? For example, the obvious answer is “the algorithm implemented in your mind causes that to happen”.
It is an interesting mental exercise, when you are about to do something but have not yet begun it, to try to introspectively perceive the moment of decision. I find it’s like trying to see the back of my own head.
That’s a good question, judging by the number and variety of replies.
I’d suggest that in a way, things go the other way around. Instead of your concluding that you should do something causing you to do it, I think you are (already) aiming to do something, and that drives you to figure out what you should do. The urge to do causes the figuring out of what to do, rather than the figuring causing the doing.
But that’s a little over-simplified, as discovered by people trying to program robots that interact with the world. Deciding what to do at any given moment is distinctly non-trivial.
At the risk of providing a non-answer, I’ll say: operant conditioning.
The test problem, the solving of it, and getting an answer correspond to a light coming on, pressing a lever, and getting food.
We’ve long since been trained that solving problems in that context builds up token points that will pay off later in praise and promises of money.
Presumably this training translates fairly well to real-world problems.
Indeed, that’s the conclusion I came to. What I wonder now is how we operant-condition ourselves without just reinforcing reinforcement itself. Which, I suppose, is more or less precisely what the Friendly AI problem is.
The Perceptual Control Theory crowd here (pjeby, RichardKennaway, Kaj) will probably respond with some kind of black-box control-systems model.
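For anyone who hasn’t met that framing, here is a minimal sketch of what such a control-systems model might look like (my own toy, not anything from the PCT literature; make-controller and run-loop are invented names). Behaviour is driven by the gap between a reference and the current perception, with no separate step where the system decides to act after concluding that it should.

```scheme
;; Toy proportional control loop (a sketch only; `make-controller` and
;; `run-loop` are invented names).  The controller's output is just
;; gain * (reference - perception); acting and perceiving alternate
;; until the error is gone.
(define (make-controller reference gain)
  (lambda (perception)
    (* gain (- reference perception))))      ; output proportional to the error

(define (run-loop controller state steps)    ; `state` doubles as the perception
  (if (= steps 0)
      state
      (run-loop controller
                (+ state (controller state)) ; the action nudges the perceived state
                (- steps 1))))

;; Drive a perception of 0 toward a reference of 10:
(run-loop (make-controller 10 0.5) 0 20)     ; => ~10
```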
I don’t have a complete answer, but I can tell you what form it takes.
The quantum states in your body become entangled with a new Everett branch, branches being weighted by the Born probabilities. This is what your choice to find the derivative (or not) feels like. These new, random values get filtered through the rest of your architecture into coherent action, as opposed to the seizure you would have if this randomness were not somehow filtered.
I know, not much at the nuts-and-bolts level, but I hope that provides a good sketch.
In a deterministic classical universe, minds and beliefs and decisions could all work just as they do in our world. Any good argument should generalize there.
“Entanglement” is the black box there, and PCT, as set out in the materials I’ve linked to in past posts, is the general form the real answer will take.
The more general answer, but too general to be of practical use, is the one that several people have given already. At some point the hardware bottoms out in doing the task instead of thinking about it.