Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.
Ah, ok, sorry. The materialist, dissolved view of free-will-related questions has been a strongly held view of mine since a very young age, so my prior for a person who is aware of these arguments yet subscribes to what I’ll call the “naive view,” for lack of a better word, is very low.
It’s not really the particulars of the Sequences that are in question here. The people who say free will doesn’t exist, the people who say it does but redefine free will in funny ways, the panpsychists, the compatibilists and the incompatibilists all share a non-dualist view which does not allow them to label the search engine’s processes and the human’s processes as fundamentally, qualitatively different processes. This is a deep philosophical divide that has been debated for, as far as I am aware, at least two thousand years.
As a practical matter, speaking about choices of light switches seems silly. Given this, I don’t see why speaking about choices of search engines is not silly.
By analogy, speaking of choices of humans seems silly, since humans are governed by the same basic laws.
The fundamental disagreement here runs rather deep; it’s not going to be possible to talk about this without diving into free will.
Philosophical disagreements aside, that doesn’t seem to be a good way to construct priors for other people’s views.
If I understood the causal mechanisms underlying the actions of humans as well as I do those underlying light switches, talking about the former as “choices” would seem as silly to me as talking that way about the latter does.
But I don’t, so it doesn’t.
I assume you don’t understand the causal mechanisms underlying the actions of humans either. So why does talking about them as “choices” seem silly to you?
I agree with you. Whether we model something as an agent or as an object is a feature of our map, not the territory. It’s not useful to model light switches as agents because they are too simple; looking at them through the lens of preferences neither simplifies anything nor adds information. Meanwhile, it is useful to model humans, at least partially, as preference-maximizing agents in order to make approximate predictions.
However, in the context of the larger discussion, I interpret Lumifer as treating the distinction between “choice” and “event” as a feature of the territory itself, and as positing a fundamental qualitative difference between a “choice” and other sorts of events. My reply should be read as an assertion that such qualitative differences are not features of the territory: if it’s impossible to model a light switch as having choices, then it’s also impossible to model a human as having choices. (My actual belief is that it’s possible to model both as having choices or as not having them.)
Is your actual belief that there are equivalent grounds for modeling both either way?
If so, I disagree… from my own perspective, modeling people as preference-maximizing agents is significantly more justified (due to differences in the territory) than modeling a light switch that way.
If not, to what do you attribute the differential?
Is your actual belief that there are equivalent grounds for modeling both either way?
...it is possible to model things either way, but it is more useful for some objects than others.
It’s not useful to model light switches as agents because they are too simple; looking at them through the lens of preferences neither simplifies anything nor adds information. Meanwhile, it is useful to model humans, at least partially, as preference-maximizing agents in order to make approximate predictions.
Modeling an object as an agent is useful when the object exhibits a pattern of behavior which is roughly consistent with preference maximization. A search engine is well modeled as an agent. A human is very well modeled as an agent.
A light switch is very poorly modeled as an agent. Thinking of it in terms of a preference pattern doesn’t make its behavior any easier to predict. But you can model it as an agent, if you’d like.
By “justified” do you mean “useful”?
I am willing to adopt “useful” in place of “justified” if it makes this conversation easier. In which case my question could be rephrased: “Is it equally useful to model both either way?”
To which your answer seems to be no… it’s more useful to model a human as an agent than it is a light switch. (I’m inferring, because despite introducing the “useful” language, what you actually say is instead framed in terms of something being “well-modeled.” But I’m assuming that by “well-modeled” you mean “useful.”)
And your answer to the follow-up question is that the pattern of behavior of a light switch differs from that of a search engine or a human, such that adopting an intentional stance towards the former doesn’t make it any easier to predict.
Have I understood you correctly?
Yup. Modeling something as a preference-maximizing agent is generally useful for things which systematically behave in ways that maximize certain outcomes across a diverse array of situations. It allows you to make accurate predictions even when you don’t fully understand the mechanics that generate the events you are predicting.
(I distinguished “useful” from “justified” because I wasn’t sure whether “justified” had moral connotations in your usage.)
Edit: On reading the wiki, I tend to agree with the views that the wiki attributes to Dennett. Thanks for the reference and the word “intentional stance”.
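To make the point about preference-maximizing models concrete, here is a minimal sketch. Everything in it (the function names, the toy “search engine”, the relevance numbers) is made up purely for illustration: a design-stance rule for the light switch next to an intentional-stance predictor that just picks whichever option best serves an imputed preference.

```python
# Minimal sketch, purely illustrative: two ways of predicting a system's behavior.

def predict_light_switch(state: str, action: str) -> str:
    """Design stance: predict from the known mechanism; no preferences needed."""
    if action == "flip":
        return "off" if state == "on" else "on"
    return state  # nothing else changes the switch


def predict_agent(options, utility):
    """Intentional stance: assume the system does whatever best serves its
    (imputed) preferences, even if we don't know its internal mechanism."""
    return max(options, key=utility)


# A toy "search engine" modeled as an agent that prefers more relevant results.
relevance = {"cat videos": 0.2, "free will FAQ": 0.9, "blank page": 0.0}

print(predict_light_switch("on", "flip"))             # -> off
print(predict_agent(list(relevance), relevance.get))  # -> free will FAQ
```

The second strategy only pays off when the system’s behavior really does track something like a utility; for the light switch it buys nothing over the one-line mechanism.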
OK. So, having clarified that, I return to your initial comment:
The distinction you are making between the input-output function of a human as a “choice” vs. the input-output of a machine as “not-a-choice” sounds very reminiscent of the traditional naive / confused model of free will that people commonly have before dissolving the question
...and am as puzzled by it as I was in the first place.
You agree that the input-output function of a human differs from the input-output of a machine like a light switch in ways that make it more useful to model the former but not the latter as maximizing preferences. (To adopt the intentional stance towards the former and the design stance towards the latter, in Dennett’s terminology.)
So, given that, what is your objection to Lumifer’s distinction? “Choice” seems like a perfectly reasonable word to use when taking an intentional stance, and to not use when taking a design stance.
When I asked earlier, you explained that your objection had to do with attributing “territory-level” differences to humans and machines, when it’s really a “map-level” objection… that it’s possible to talk about a light-switch’s choices, or not talk about a human’s choices, so it’s not really a difference in the system at all, just a difference in the speaker.
But given that you agree that there’s a salient “territory-level” difference between the two systems (specifically, the differences which make the intentional stance more useful than the design stance wrt humans, but not wrt light-switches), I don’t quite get the objection. Sure, it’s possible to take either stance towards either system, but it’s more useful to take the intentional stance towards humans, and that’s a “fact about the territory.”
No?
Because in the preceding comment, I was demonstrating that we should not morally care about light switches, search engines, and paperclippers… whereas we should morally care about fish, dogs, and humans… because of differences in the preference profiles of these beings when they are modeled as agents.
Peter Hurford disagreed with me on the non-moral status of the paper-clipper. I was demonstrating the non-moral status of a being which cared only for paper clips by analogy to a search engine (a being which only cares about bringing up the best search result).
Whereas what Lumifer was saying is that the very premise that a search engine could have choices was fundamentally flawed (which, if true, would cause the whole analogy to break down).
The thing is, it’s not fundamentally flawed to think of a search engine as having choices. Sure, search engines are a little less usefully modeled as agents than humans are, but it’s just a matter of degree.
the input-output function of a human as a “choice” vs. the input-output of a machine as “not-a-choice”
I was objecting to his hard, qualitative binary, not your and Dennett’s soft, quantitative spectrum.
Thanks for clarifying.