You appear to have been projecting entirely different arguments and theses onto me, and posting links to articles whose conclusions I appear to be more in line with than you are; again, as far as I can tell.
That’s why philosophy is such a bog, and why it’s necessary to arrive at technical conclusions, however insignificant, in order to move forward reliably.
I chose the articles in the comment above because they superficially matched what you are talking about, as a potential point for establishing understanding. I asked, basically, how you would characterize your agreement/disagreement with them, and how that carries over to the preference debate.

> I asked, basically, how you would characterize your agreement/disagreement with them, and how that carries over to the preference debate.
And I answered that I agree with them, and that I considered it foundational material to what I’m talking about.
> That’s why philosophy is such a bog, and why it’s necessary to arrive at technical conclusions, however insignificant, in order to move forward reliably.
Indeed, which is why I’d now like to have the answer to my question, please. What definition of “preferences” are you using, such that an alarm system, thermostat, and human all have them? (Since this is not the common, non-metaphorical usage of “preference”.)
Preference is an order on lotteries over possible worlds (ideally established by expected utility), usually with the agent a part of the world. Computations about this structure are normally performed by a mind inside the world. The agent tries to find actions that determine the world to be as high as possible in the preference order, given its knowledge about it. Now, does it really help?
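This definition can be made concrete with a minimal sketch. All names here (the worlds, utilities, and lottery labels) are illustrative assumptions, not anything from the discussion; the point is only that a preference order over lotteries falls out of an expected-utility calculation:

```python
# Hypothetical sketch: "preference" as an order on lotteries over possible
# worlds, induced by expected utility. Worlds, utilities, and lottery names
# are made up for illustration.

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as {world: probability}."""
    return sum(p * utility[world] for world, p in lottery.items())

# Three possible worlds with an assumed utility assignment over them.
utility = {"sunny": 10, "cloudy": 4, "rain": 0}

# Lotteries are probability distributions over the possible worlds.
lotteries = {
    "stay_in": {"sunny": 0.2, "cloudy": 0.3, "rain": 0.5},
    "go_out":  {"sunny": 0.6, "cloudy": 0.3, "rain": 0.1},
}

# The preference order ranks lotteries by expected utility, highest first;
# the agent then looks for actions whose lotteries sit high in this order.
order = sorted(lotteries,
               key=lambda a: expected_utility(lotteries[a], utility),
               reverse=True)
print(order)  # ['go_out', 'stay_in']: EU 7.2 vs. 3.2
```

On this reading, an alarm system or thermostat "has preferences" only in the degenerate sense that its behavior can be described by such an order; nothing in the structure requires a felt experience.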
Yes, as it makes clear that what you’re talking about is a useful reduction of “preference”, unrelated to the common, “felt” meaning of “preference”. That alleviates the need to further discuss that portion of the reduction.
The next step of reduction would be to unpack your phrase “determine the world”… because that’s where you’re begging the question that the agent is determining the world, rather than determining the thing it models as “the world”.
So far, I have seen no one explain how an agent can go beyond its own model of the world, except as perceived by another agent modeling the relationship between that agent and the world. It is simply asserted repeatedly (as you have effectively just done) as an obvious fact.
But if it is an obvious fact, it should be reducible, as “preference” is reducible, should it not?
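The distinction at issue can itself be sketched. In this made-up example (all names are illustrative), the agent's decision procedure consults only its internal model; only the surrounding code, acting as a second agent that models both, can compare that model against the world:

```python
# Hypothetical sketch: the agent computes its action entirely from its
# internal model; only an outside observer modeling both the agent and
# the world can see whether model and world agree.

world = {"temperature": 18}   # the world, as the outer observer describes it
model = {"temperature": 21}   # the agent's (here, inaccurate) internal model

def agent_act(model, target=20):
    """The agent consults only its model, never `world` directly."""
    return "heat" if model["temperature"] < target else "idle"

action = agent_act(model)
print(action)  # 'idle': the model says it is already warm enough
# Only from outside the agent can the discrepancy be observed:
print(world["temperature"] == model["temperature"])  # False
```

The agent here "determines the thing it models as the world" (it idles, because its model reads 21), while the relationship between model and world is visible only to the outer code, which is the second-agent perspective described above.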
Hmm… Okay, this should’ve been easier if the possibility of this agreement was apparent to you. This thread is thereby merged here.