By the causal chain that goes into it. Does it involve modeling the problem and considering values and things like that?
So if a programmable thermostat turns the heat on when the temperature drops below 72 degrees F, whether that’s a decision or not depends on whether its internal structure is a model of the “does the heat go on?” problem, whether its set-point is a value to consider, and so forth. Perhaps reasonable people can disagree on that, and perhaps they can’t, but in any case if I turn the heat on when the temperature drops below 72 degrees F most reasonable people would agree that my brain has models and values and so forth, and therefore that I have made a decision.
(nods) OK, that’s fair. I can live with that.
The thermostat doesn’t model the problem. The engineer who designed the thermostat modeled the problem, and the thermostat’s gauge is a physical manifestation of the engineer’s model.
It’s in the same sense that I don’t decide to be hungry—I just am.
ETA: Dangit, I could use a sandwich.
Combining that assertion with your earlier one, I get the claim that the thermostat’s turning the heat on is a decision, since the causal chain that goes into it involves modeling the problem; it just isn’t the thermostat’s decision, but rather the designer’s.
Or, well, partially the designer’s.
Presumably, since I set the thermostat’s set-point, it’s similarly not the thermostat’s values which the causal chain involves, but mine.
So it’s a decision being made collectively by me and the engineer, I guess.
Perhaps some other agents, depending on what “things like that” subsumes.
This seems like an odd way to talk about the situation, but not a fatally odd way.
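The thermostat discussed above can be made concrete with a minimal sketch. This is a hypothetical illustration, not any real device's firmware: the function name and parameters are invented. The point it illustrates is the one from the thread: the "model" of the heating problem is frozen into a comparison the engineer wrote, and the set-point (the "value") is supplied by the user, so neither belongs to the device itself.

```python
# Hypothetical thermostat control logic, for illustration only.
# The engineer's model of the "does the heat go on?" problem is
# baked into the comparison below; the user's value enters as the
# set-point argument. The device merely executes both.

def should_heat(current_temp_f: float, set_point_f: float = 72.0) -> bool:
    """Turn the heat on when the temperature drops below the set-point."""
    return current_temp_f < set_point_f

# At 70 F the reading is below the 72 F set-point, so the heat goes on;
# at 74 F it stays off.
print(should_heat(70.0))
print(should_heat(74.0))
```

On this picture, nothing in the running code deliberates: the decision-like structure was contributed upstream, partly by whoever wrote the comparison and partly by whoever chose the set-point.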