We know that physics does not support the idea of metaphysical free will. By metaphysical free will I mean the magical ability of agents to change the world by just making a decision to do so.
According to my understanding of the ordinary, everyday, non-magical meanings of the words “decide”, “act”, “change”, etc., we do these things all the time. So do autonomous vehicles, for that matter. So do cats and dogs. Intention, choice, and steering the world into desired configurations are what we do, as do some of our machines.
It is strange that people are so ready to deny these things to people, when they never make the same arguments about machines. Instead, for example, they want to know what a driverless car saw and decided when it crashed, or protest that engine control software detected when it was under test and tuned the engine to misleadingly pass the emissions criteria. And of course there is a whole mathematical field called “decision theory”. It’s about decisions.
After all, what’s the point of making decisions if you are just a passenger spinning a fake steering wheel not attached to any actual wheels?
The simile contradicts your argument, which implies that there is no such thing as a steering wheel. But there is. Real steering wheels, that the real driver of a real car uses to really steer it. Are the designers and manufacturers of steering wheels wasting their efforts?
The answer is the usual compatibilist one: we are compelled to behave as if we were making decisions by our built-in algorithm.
Now that’s magic — to suppose that our beliefs are absolutely groundless, yet some compelling force maintains them in alignment with reality.
According to my understanding of the ordinary, everyday, non-magical meanings of the words “decide”, “act”, “change”, etc., we do these things all the time.
We perceive the world as if we were intentionally doing them, yes. But there is no “top-down causation” in physics that supports this view. And our perspective on agency depends on how much we know about the “agent”: the more we know, the less agenty the entity feels. It’s a known phenomenon. I mentioned it before a couple of times, including here and on my blog.
The same argument that “we” do not “do” things, also shows that there is no such thing as a jumbo jet, no such thing as a car, not even any such thing as an atom; that nothing made of parts exists. We thought protons were elementary particles, until we discovered quarks. But no: according to this view “we” did not “think” anything, because “we” do not exist and we do not “think”. Nobody and nothing exists.
All that such an argument does is redefine the words “thing” and “exist” in ways that no-one has ever used them and no-one ever consistently could. It fails to account for the fact that the concepts work.
You say that agency is bugs and uncertainty, that its perception is an illusion stemming from ignorance; I say that agency is control systems, a real thing that can be experimentally detected in both living organisms and some machines, and detected to be absent in other things.
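For what it is worth, here is a minimal sketch, in Python, of the kind of experiment that claim points at: disturb a variable and see whether something pushes back. The simulation, the gain value, and the function names are all invented for illustration; this is not anyone's actual experimental setup.

```python
def run_trial(controlled: bool, steps: int = 200) -> float:
    """Simulate a variable under a constant disturbance.

    If `controlled` is True, a simple proportional controller acts to keep
    the variable at the reference value; otherwise nothing opposes the push.
    Returns the final distance from the reference.
    """
    reference = 0.0      # the state the putative agent "wants"
    x = 0.0              # the observable variable
    disturbance = 0.05   # a steady external push per step
    gain = 0.5           # controller strength (arbitrary choice)

    for _ in range(steps):
        x += disturbance                  # the world pushes
        if controlled:
            error = reference - x         # the controller senses the error...
            x += gain * error             # ...and acts to reduce it
    return abs(x - reference)

# The experimental signature: the disturbance is almost fully cancelled only
# when a controller is present.
print("with controller:   ", round(run_trial(True), 3))   # stays near 0
print("without controller:", round(run_trial(False), 3))  # drifts far away
```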
The same argument that “we” do not “do” things, also shows that there is no such thing as a jumbo jet, no such thing as a car, not even any such thing as an atom; that nothing made of parts exists.
and
It fails to account for the fact that the concepts work.
Actually, using the concepts that work is the whole point of my posts on LW, as opposed to using the concepts that feel right. I dislike terms like “exist” as pointing to some objective reality, and this is where I part ways with Eliezer. To me it is “models all the way down.” Here is another post on this topic from a few years back: Mathematics as a lossy compression algorithm gone wild. Once you consciously replace “true” with “useful” and “exist” with “usefully modeled as,” a lot of confusion over what exists and what does not, what is true and what is false, what is knowledge and what is belief, what is objective and what is subjective, simply melts away. In this vein, it is very useful to model a car as a car, not as a transient spike in quantum fields. In the same vein, it is useful to model the electron scattering through double slits as a transient spike in quantum fields, and not as a tiny ping-pong ball that can sometimes turn into a wave.
I say that agency is control systems, a real thing that can be experimentally detected in both living organisms and some machines, and detected to be absent in other things.
I agree that a lot of agent-looking behavior can be usefully modeled as a multi-level control system, and, if anything, this is not done enough in biology, neuroscience or applied philosophy, if the latter is even a thing. By the same token, the control system approach is a useful abstraction for many observed phenomena, living or otherwise, not just agents. It does not lay claim to what an agent is, just what approach can be used to describe some agenty behaviors. I see absolutely no contradiction with what I said here or elsewhere.
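As a toy illustration of what “multi-level” can mean here (my own sketch, with made-up numbers, not anything taken from the comments above): an outer loop that never acts on the world directly, only by adjusting the reference that an inner loop maintains.

```python
def simulate(steps: int = 300) -> float:
    target_position = 10.0   # what the outer loop cares about
    position = 0.0           # the variable the inner loop acts on
    velocity = 0.0

    for _ in range(steps):
        # Outer loop: turn the position error into a velocity goal.
        velocity_goal = 0.1 * (target_position - position)
        # Inner loop: nudge the actual velocity toward that goal, then move.
        velocity += 0.5 * (velocity_goal - velocity)
        position += velocity
    return position

print(round(simulate(), 2))  # ends up very close to 10.0
```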
Maybe one way to summarize my point in this post is that modeling decisions as learning about oneself and the world is more useful for making good decisions than modeling an agent as changing the world with her decisions.
Actually, using the concepts that work is the whole point of my posts on LW, as opposed to using the concepts that feel right.
It seems to me that the concepts “jumbo jet”, “car”, and “atom” all work. If they “feel right”, it is because they work. “Feeling right” is not some free-floating attribute to be bestowed at will on this or that.
A telling phrase in the post you linked is “for some reason”:
In yet other words, a good approximation is, for some reason, sometimes also a good extrapolation.
Unless you can expand on that “some reason”, this is just pushing under the carpet the fact that certain things work spectacularly well, and leaving Wigner’s question unanswered.
Maybe one way to summarize my point in this post is that modeling decisions as learning about oneself and the world is more useful for making good decisions than modeling an agent as changing the world with her decisions.
Thought and action are two different things, as different as a raven and a writing desk.
Will only reply to one part, to highlight our basic (ontological?) differences:
Thought and action are two different things, as different as a raven and a writing desk.
A thought is a physical process in the brain, which is a part of the universe. An action is also a physical process in the universe, so it is very much like a thought, only more visible to those without predictive powers.
If choice and counterfactuals exist, then an action is something that can affect the future, while a thought is not. Of course, that difference no longer applies if your ontology doesn’t feature choices and counterfactuals...
Will only reply to one part, to highlight our basic (ontological?) differences:
What your ontology should be is “nothing” or “mu”. You are not keeping to your commitments.
We seem to have very different ontologies here, and not converging. Also, telling me what my ontology “should” be is less than helpful :) It helps to reach mutual understanding before giving prescriptions to the other person. Assuming you are interested in more understanding, and less prescribing, let me try again to explain what I mean.
If choice and counterfactuals exist, then an action is something that can affect the future, while a thought is not. Of course, that difference no longer applies if your ontology doesn’t feature choices and counterfactuals…
In the view I am describing here “choice” is one of the qualia, a process in the brain. Counterfactuals are another, related quale: the feeling of possibilities. Claiming anything more is a mind projection fallacy. The mental model of the world changes with time. I am not even claiming that time passes, just that there is a mental model of the universe, including the counterfactuals, for each moment in the observer’s time. I prefer the term “observer” to “agent”, since it does not imply having a choice, only watching the world (as represented by the observer’s mental model) unfold.
And very different epistemologies. I am not denying the very possibility of knowing things about reality.
and not converging. Also, telling me what my ontology “should” be is less than helpful :) It helps to reach mutual understanding before giving prescriptions to the other person.
All I am doing is taking you at your word.
You keep saying that it is models all the way down, and there is no way to make true claims about reality. If I am not to take those comments literally, how am I to take them? How am I to guess the correct non-literal interpretation, out of the many possible ones?
In the view I am describing here “choice” is one of the qualia, a process in the brain. Counterfactuals are another, related quale: the feeling of possibilities. Claiming anything more is a mind projection fallacy.
That’s an implicit claim about reality. Something can only be a mind projection if there is nothing in reality corresponding to it. It is not sufficient to say that it is in the head or the model; it also has to not be in the territory, or else it is a true belief, not a mind projection. To say that something doesn’t exist in reality is to make a claim about reality as much as to say that something does.
The mental model of the world changes with time. I am not even claiming that time passes, just that there is a mental model of the universe, including the counterfactuals, for each moment in the observer’s time.
Again “in the model” does not imply “not in the territory”.
I dislike terms like “exist” as pointing to some objective reality,
You seem happy enough with “not exist”, as in “agents, counterfactuals and choices don’t exist”.
Once you consciously replace “true” with “useful” and “exist” with “usefully modeled as,” a lot of confusion over what exists and what does not, what is true and what is false, what is knowledge and what is belief, what is objective and what is subjective, simply melts away.
If it is really possible for an agent to affect the future or steer themselves into alternative futures, then there is a lot of potential utility in it, in that you can end up in a higher-utility future than you would otherwise have. OTOH, if there are no counterfactuals, then whatever utility you gain is predetermined. So one cannot assess the usefulness, in the sense of utility gain, of models, in a way independent of the metaphysics of determinism and counterfactuals. What is useful, and how useful it is, depends on what is true.
I agree that a lot of agent-looking behavior can be usefully modeled as a multi-level control system, and, if anything, this is not done enough in biology, neuroscience or applied philosophy, if the latter is even a thing. By the same token, the control system approach is a useful abstraction for many observed phenomena, living or otherwise, not just agents. It does not lay claim to what an agent is, just what approach can be used to describe some agenty behaviors. I see absolutely no contradiction with what I said here or elsewhere.
It contradicts the “agents don’t exist thing” and the “I never talk about existence thing”. If you only object to reductively inexplicable agents, that would be better expressed as “there is nothing nonreductive”.
Although that still wouldn’t help you come to the conclusion that there is no choice and no counterfactuals, because that is much more about determinism than reductionism.
If it is really possible for an agent to affect the future or steer themselves into alternative futures, then there is a lot of potential utility in it, in that you can end up in a higher-utility future than you would otherwise have. OTOH, if there are no counterfactuals, then whatever utility you gain is predetermined.
Yep, some possible worlds have more utility for a given agent than others. And, yes, sort of. Whatever utility you gain is not your free choice, and not necessarily predetermined, just not under your control. You are a mere observer who thinks they can change the world.
It contradicts the “agents don’t exist thing” and the “I never talk about existence thing”.
I don’t see how. Seems there is an inferential gap there we haven’t bridged.
Once you consciously replace “true” with “useful” and “exist” with “usefully modeled as,” a lot of confusion over what exists and what does not, what is true and what is false, what is knowledge and what is belief, what is objective and what is subjective, simply melts away.
How do you know that the people who say “agents exist” don’t mean “some systems can be usefully modelled as agents”?
By the same token, the control system approach is a useful abstraction for many observed phenomena, living or otherwise, not just agents. It does not lay claim to what an agent is, just what approach can be used to describe some agenty behaviors. I see absolutely no contradiction with what I said here or elsewhere.
You are making a claim about reality, that counterfactuals don’t exist, even though you are also making a meta claim that you don’t make claims about reality.
If probabilistic agents[*] and counterfactuals are both useful models (and I don’t see how you can consistently assert the former and deny the latter), then counterfactuals “exist” by your lights.
[*] Or automaton, if you prefer. If someone builds a software gizmo that is probabilistic and acts without specific instruction, then it is an agent and an automaton all at the same time.
I agree, the apparent emergent high-level structures look awfully like agents. That intentional stance tends to dissipate once we understand them more.
If intentionality just means seeking to pursue or maximise some goal, there is no reason an artificial system should not have it. But the answer is different if intentionality means having a ghost or homunculus inside. And neither is the same as the issue of whether an agent is deterministic, or capable of changing the future.
Even when the agent has more compute than we do? I continue to take the intentional stance towards agents I understand but can’t compute, like MCTS-based chess players.
I would model the program as a thing that is optimizing for a goal. While I might know something about the program’s weaknesses, I primarily model it as a thing that selects good chess moves. Especially if it is a better chess player than I am.
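A hedged sketch of what that kind of modeling can look like in code: predict the other system's next move by assuming it picks whatever action scores best under the goal we attribute to it, without simulating its internals. The helper names (`legal_actions`, `evaluate_for`) and the toy game are invented for the example.

```python
def predict_action(state, legal_actions, evaluate_for):
    """Intentional-stance prediction: assume the other system takes whichever
    action best serves the goal we attribute to it."""
    return max(legal_actions(state), key=lambda a: evaluate_for(state, a))

# Toy usage: a one-number "game" where we attribute the goal "get the state
# close to 100" to the other player.
legal_actions = lambda s: (-3, -1, +1, +3)       # moves we believe it can make
evaluate_for = lambda s, a: -abs(100 - (s + a))  # the goal we attribute to it

print(predict_action(90, legal_actions, evaluate_for))  # predicts +3
```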
Now that’s magic — to suppose that our beliefs are absolutely groundless, yet some compelling force maintains them in alignment with reality.
See also: Hyakujo’s fox.
We perceive the world as if we were intentionally doing them, yes. But there is no “top-down causation” in physics that supports this view. And our perspective on agency depends on how much we know about the “agent”: the more we know, the less agenty the entity feels. It’s a known phenomenon. I mentioned it before a couple of times, including here and on my blog.
“The sage is one with causation.”
Yep, some possible worlds have more utility for a given agent than others. And, yes, sort of. Whatever utility you gain is not your free choice, and not necessarily predetermined, just not under your control. You are a mere observer who thinks they can change the world.
That’s a statement about the world. Care to justify it?
There is no full strength top-down determinism, but systems-level behaviour is enough to support a common-sense view of decision making.
If intentionality just means seeking to pursue or maximise some goal, there is no reason an artificial system should not have it. But the answer is different if intentionality means having a ghost or homunculus inside. And neither is the same as the issue of whether an agent is deterministic, or capable of changing the future.
More precision is needed.
Even when the agent has more compute than we do? I continue to take the intentional stance towards agents I understand but can’t compute, like MCTS-based chess players.
What do you mean by taking the intentional stance in this case?
I would model the program as a thing that is optimizing for a goal. While I might know something about the program’s weaknesses, I primarily model it as a thing that selects good chess moves. Especially if it is a better chess player than I am.
See: Goal inference as inverse planning.
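Roughly, inverse planning turns the earlier prediction sketch around: instead of predicting actions from a known goal, infer the goal from observed actions, assuming the agent tends to act in ways that serve its goal. The grid world, the candidate goals, and the softmax choice model below are all invented for illustration and are only loosely inspired by that line of work.

```python
import math

candidate_goals = [0, 5, 9]   # hypotheses about which cell (0..9) the agent wants
actions = [-1, +1]            # step left or step right in a 1-D world

def action_prob(pos, act, goal, beta=2.0):
    """Probability the agent takes `act` at `pos` if it wants `goal`
    (softmax over how well each action moves it toward the goal)."""
    def score(a):
        return -abs(goal - max(0, min(9, pos + a)))
    z = sum(math.exp(beta * score(a)) for a in actions)
    return math.exp(beta * score(act)) / z

def infer_goal(trajectory):
    """Posterior over candidate goals given (position, action) observations,
    starting from a flat prior."""
    posterior = {g: 1.0 for g in candidate_goals}
    for pos, act in trajectory:
        for g in candidate_goals:
            posterior[g] *= action_prob(pos, act, g)
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Watching someone walk 2 -> 3 -> 4 makes "wants 5" and "wants 9" far more
# plausible than "wants 0".
print(infer_goal([(2, +1), (3, +1), (4, +1)]))
```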