Note that probability is also in the mind, and yet you see all the facts through it, and you can’t ever revoke it; each mind is locked in its subjectively objective character. What do you think of that?
I think that those things have already been very well explained by Eliezer—so much so that I assumed that you (and the others participating in this discussion) would have already internalized them to the same degree as I have, such that asserting “preferences” to be “about” things would be a blatantly obvious instance of the mind projection fallacy.
That’s why, early on, I tended to just speak as though it was bloody obvious, and why I haven’t been painstakingly breaking it all out piece by piece, and why I’ve been baffled by the argument, confusion, and downvoting from people for whom this sort of basic reductionism ought to be a bloody simple matter.
Oh, and finally, I think that you still haven’t given your definition of “preference”, such that humans and alarm systems both have it, so that we can then discuss how it can be “about” something… and whether that “aboutness” exists in the thing having the preference, or merely in your mental model of the thing.
I think that those things have already been very well explained by Eliezer
That, in reply to a comment full of links to Eliezer’s articles. You also didn’t answer my comment, but wrote some text that doesn’t help me in our argument. I wasn’t even talking about preference.
I know. That’s the problem. See this comment and this one, where I asked for your definition of preference, which you still haven’t given.
You also didn’t answer my comment, but wrote some text that doesn’t help me in our argument.
That’s because you also “didn’t answer my comment, but wrote some text that doesn’t help me in our argument.” I was attempting to redirect you to answering the question which you’ve now ducked twice in a row.
Writing text that doesn’t help is pointless and mildly destructive. I don’t see how me answering your questions would help this situation. Maybe you have the same sentiment towards answering my questions, but that’s separate from reciprocation. I’m currently trying to understand your position in terms of my position, not to explain to you my position.
Writing text that doesn’t help is pointless and mildly destructive. I don’t see how me answering your questions would help this situation.
We reached a point in the discussion where it appears the only way we could disagree is if we had a different definition of “preference”. Since I believe I’ve made my definition quite clear, I wanted to know what yours is.
It might not help you, but it would certainly help me to understand your position, if you are not using the common definition of preference.
Maybe you have the same sentiment towards answering my questions, but that’s separate from reciprocation.
I asked you first, and you responded with (AFAICT) a non-answer. You appear to have been projecting entirely different arguments and theses onto me, and posting links to articles whose conclusions I appear to be more in line with than you are—again, as far as I can tell.
So, I actually answered your question (i.e. “what do you think?”), even though you still haven’t answered mine.
You appear to have been projecting entirely different arguments and theses onto me, and posting links to articles whose conclusions I appear to be more in line with than you are—again, as far as I can tell.
That’s why philosophy is such a bog, and why it’s necessary to arrive at technical conclusions, however insignificant, in order to move forward reliably.
I chose the articles in the comment above because they were a surface match for what you are talking about, as a potential point for establishing understanding. I asked basically how you can characterize your agreement/disagreement with them, and how it carries over to the preference debate.
I asked basically how you can characterize your agreement/disagreement with them, and how it carries over to the preference debate.
And I answered that I agree with them, and that I considered it foundational material to what I’m talking about.
That’s why philosophy is such a bog, and why it’s necessary to arrive at technical conclusions, however insignificant, in order to move forward reliably.
Indeed, which is why I’d now like to have the answer to my question, please. What definition of “preferences” are you using, such that an alarm system, thermostat, and human all have them? (Since this is not the common, non-metaphorical usage of “preference”.)
Preference is an order on lotteries over possible worlds (ideally established by expected utility), usually with the agent a part of the world. Computations about this structure are normally performed by a mind, inside the mind. The agent tries to find actions that determine the world to be as high as possible on the preference order, given its knowledge of the world. Now, does that really help?
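A minimal sketch of that definition, assuming a finite set of worlds and an illustrative utility function (the world names and utility values here are hypothetical, not part of the comment above):

```python
# A "lottery" is a probability distribution over possible worlds,
# represented here as a dict mapping world -> probability.

def expected_utility(lottery, utility):
    """Expected utility of a lottery under a utility function on worlds."""
    return sum(p * utility(world) for world, p in lottery.items())

def prefers(lottery_a, lottery_b, utility):
    """Preference order on lotteries induced by expected utility:
    A is (weakly) preferred to B iff EU(A) >= EU(B)."""
    return expected_utility(lottery_a, utility) >= expected_utility(lottery_b, utility)

# Hypothetical worlds and utilities, purely for illustration.
utility = {"world_sun": 3.0, "world_rain": 1.0, "world_snow": 0.0}.get
sure_sun = {"world_sun": 1.0}
coin_flip = {"world_rain": 0.5, "world_snow": 0.5}

assert prefers(sure_sun, coin_flip, utility)  # EU(sure_sun)=3.0 >= EU(coin_flip)=0.5
```

On this reading, the agent’s search over actions amounts to picking the action whose induced lottery sits highest in this order, given what it knows.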
Yes, as it makes clear that what you’re talking about is a useful reduction of “preference”, unrelated to the common, “felt” meaning of “preference”. That alleviates the need to further discuss that portion of the reduction.
The next step of reduction would be to unpack your phrase “determine the world”… because that’s where you’re begging the question that the agent is determining the world, rather than determining the thing it models as “the world”.
So far, I have seen no-one explain how an agent can go beyond its own model of the world, except as perceived by another agent modeling the relationship between that agent and the world. It is simply repeatedly asserted (as you have effectively just done) as an obvious fact.
But if it is an obvious fact, it should be reducible, as “preference” is reducible, should it not?
A good reply, if only you approached the discussion this constructively more often.
Hmm… Okay, this should’ve been easier if the possibility of this agreement had been apparent to you. This thread is thereby merged here.