[Question] Countering Self-Deception: When Decoupling, When Decontextualizing?

“I’m only enjoying that wine because of what it signals”
“I’m only enjoying that food because I know it’s organic”
“I’m only enjoying that movie scene because I know what happened before it”
What’s the difference, conceptually, between these three noticings, if any?
It might be helpful for me to figure out whether I’m “actually” enjoying the wine, or whether it’s a sort of crony belief: disentangling those is useful for making better decisions, say, in deciding whether to go to a wine-tasting when a status boost among those people isn’t relevant to me.
Perhaps similarly, I’m better off knowing if my knowledge of whether this food item is organic is interfering with my taste experience.
But then in the movie example, no one would dispute that the knowledge is relevant to the experience! Going back to our earlier examples, maybe the knowledge there was relevant too, “genuinely” making the experience better?
Maybe my degree of liking a food item is a function of both “knowledge of organic origin” and “chemical interactions with tongue receptors” just like my degree of liking of a movie is a function of both “contextual buildup from the narrative” and “the currently unfolding scene”?
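The “function of both” framing can be made concrete with a toy model (a sketch only; the functional form, the names, and all the weights below are invented for illustration). If liking is a function of several inputs, then “you only X because of Y” becomes a counterfactual comparison of the output with and without Y, rather than an all-or-nothing claim:

```python
# Toy model of liking as a function of several inputs (all numbers and
# weights are made up for illustration; nothing here is empirical).
def liking(sensory_quality: float, context_knowledge: float) -> float:
    """Combine direct sensation with contextual knowledge into one score."""
    return 0.6 * sensory_quality + 0.4 * context_knowledge

# "You only like it because it's organic" reads as a counterfactual:
# compare the score with and without the contextual input.
with_knowledge = liking(sensory_quality=0.7, context_knowledge=0.9)
without_knowledge = liking(sensory_quality=0.7, context_knowledge=0.0)
share_from_knowledge = with_knowledge - without_knowledge  # ≈ 0.36

# The "only because" accusation is literally true just in the degenerate
# case where the sensory term contributes nothing at all.
only_because = liking(sensory_quality=0.0, context_knowledge=0.9)
```

On this toy picture, the interesting question is not whether context contributes (it almost always does, as in the movie case) but how much of the output survives when the contextual input is counterfactually removed.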
Some questions to think about:
Do some of these point legitimately or illegitimately at self-deception?
Are some of these a confusion of levels and others less so?
Are some of these instances of working wishful thinking?
Are some of these better seen as actions rather than rationalizations?
Other examples to meditate on simultaneously, with helpful variance in their sense of how solved each feels:
“I only look pretty because of my cosmetics/surgery”
PS. I do think we’ve learnt that there are more ways words can be wrong than just being disguised queries. In any case, the “real question” I’m trying to ask is: what are some criteria for determining when it is appropriate to decouple, even if only internally? Are there cases where you might overcorrect for fallacies of compression? Is there possibly a compact set of criteria that we could workshop? I have an inkling there might be.
Note: this question has been expanded and heavily reworded from a previous version, making some of the comments less comprehensible. Apologies.
Can you clarify how you’re using the “you’re only” part of those questions?
Do you mean:
“Y is the only significant cause or reason for X”
“Y is literally the only factor leading you to X”
“Y is a contributing factor for X, which you seem to be unaware of”
“Y is a “tipping point” factor, without which you would not be doing X”
“Y is the “tipping point” factor, the one least likely to be true / most malleable, without which you would not be doing X”
“Y is a necessary factor, without which you wouldn’t do X no matter what else was contributing (but it is not sufficient in and of itself)”
“Y is a sufficient factor for X, regardless of any other considerations (but counterfactually you might X even if Y was false)”
In common usage, Alex telling Beth “You’re only X because of Y” implies two things:
Alex thinks Y ought not to be a contributing factor towards X (or, sometimes, replace “contributing” with “controlling,” “significant,” or “sufficient”)
Alex thinks Beth is doing wrong by allowing Y to [influence] X
Commonly accepted defenses to “You’re only marrying Earnest for his money!”:
Not Y—“I’m not marrying Earnest”
Not X—“Earnest has no money”
Not Y → X—“I didn’t know Earnest had any money”
But also Z—“We’re in love, and I’m not getting any younger, and he’s hot, and our families get along, and, yes, he’s rich, but that’s a relatively small part of his appeal.”
I think I might have made a mistake in putting in too many of these at once. The whole point is to figure out which forms of accusations are useful feedback (for whatever), and which ones are not, by putting them very close to questions we think we’ve dissolved.
Take three of these, for example. I think it might be helpful to figure out whether I’m “actually” enjoying the wine, or whether it’s a sort of crony belief. Disentangling those is useful for making better decisions, say, in deciding whether to go to a wine-tasting when a status boost with those people wouldn’t help.
Perhaps similarly, I’m better off knowing if my knowledge of whether this food item is organic is interfering with my taste experience.
But then in the movie example, no one would dispute that the knowledge is relevant to the experience! Going back to our earlier examples, maybe the knowledge there was relevant too, “genuinely” making the experience better?
Maybe my degree of liking is a function of both “knowledge of organic origin” and “chemical interactions with tongue receptors” just like my degree of liking of a movie is a function of both “contextual buildup from the narrative” and “the currently unfolding scene”?
How about when you apply this to “you only upvoted that because of who wrote it”? Maybe that’s a little closer to home.
Whether something is useful feedback depends on goals. Feedback is either useful for achieving a given goal or it isn’t. You didn’t list any goals, and thus it’s meaningless to ask which of those are useful feedback.
We might engage in mind reading and make up plausible goals that the person who’s the target of the accusations might have and discuss whether or not the feedback is useful for the goals that we imagine, but mind reading is generally problematic.
It seemed to me that avoiding fallacies of compression was always a useful thing (independent of your goal, so long as you have the time for the computation), even if only negligibly so. Yet these questions seem to be a bit of a counterexample, namely that I have to be careful when what looks like decoupling might be decontextualizing.
Importantly, I can’t seem to figure out a sharp line between the two. The examples were a useful meditation for me, so I shared them. Maybe I should rename the title to reflect this?
(I’m quite confused by my failure of conveying the point of the meditation, might try redoing the whole post.)
I’m quite confused by my failure of conveying the point of the meditation
I don’t think that you failed to communicate the point. It’s just that your approach to dealing with the issue at hand is seen as bad. And that’s actually useful feedback.
Thinking “they only disagree because they didn’t understand me for some reason that’s confusing to me” is not useful.
Goals are part of the meaning and thus any attempt to analyse the meaning independent of the goals is confused. For epistemic rationality the goal is usually about the ability to make accurate predictions and for instrumental rationality the goals are about achieving certain outcomes.
In Where to Draw the Boundaries, Zack points out (emphasis mine):
The one replies:
But reality doesn’t come with its joints pre-labeled. Questions about how to draw category boundaries are best understood as questions about values or priorities rather than about the actual content of the actual world. I can call dolphins “fish” and go on to make just as accurate predictions about dolphins as you can. Everything we identify as a joint is only a joint because we care about it.
No. Everything we identify as a joint is a joint not “because we care about it”, but because it helps us think about the things we care about.
There are more relevant things in there, which I don’t know if you have disagreements with. So maybe it’s more useful to crux with Zack’s main source. In Where to Draw the Boundary, Eliezer gives an example:
And you say to me: “It feels intuitive to me to draw this boundary, but I don’t know why—can you find me an intension that matches this extension? Can you give me a simple description of this boundary?”
I take it this game does not work for you without a goal more explicit than the one I have in the postscript to the question?
(Notice that inferring some aspects of the goal is part of the game; in the specific example Eliezer gave, they’re trying to define Art, which is as nebulous an example as it could be. Self-deception is surely less nebulous than Art.)
I was looking for this kind of engagement, which asserts/challenges either intension or extension:
You come up with a list of things that feel similar, and take a guess at why this is so. But when you finally discover what they really have in common, it may turn out that your guess was wrong. It may even turn out that your list was wrong.
You draw boundaries in response to questions. I can ask many questions about wine:
“Do I enjoy drinking wine?”, “Do I get good value for money when I seek enjoyment by paying money for wine?”, “Is the wine inherently enjoyable?” and a bunch of others. Answering those questions is about drawing boundaries in the same way as answering “Is a dolphin a fish?” is about drawing boundaries.
Your list doesn’t have any questions like that and thus there aren’t any boundaries to be drawn.
As far as the question of “What is a dolphin?” goes, Wikidata’s answer at the moment is “organisms known by a particular common name”, because the word dolphin does not refer to a single species of animal or a taxon in the taxonomic tree. Speaking of dolphins while rejecting categorizations that are not taxonomically accurate makes little sense in the first place.
As the links I’ve posted above indicate, no, lists don’t necessarily require questions to begin noticing joints and carving around them.
Questions are helpful however, to convey the guess I might already have and to point at the intension that others might build on/refute. And so...
Your list doesn’t have any questions like that
...I have had some candidate questions in the post since the beginning, and later even added some indication of the goal at the end.
EDIT: You also haven’t acknowledged/objected to my response to your “any attempt to analyse the meaning independent of the goals is confused”, so I’m not sure if that’s still an undercurrent here.
I have plenty of comments on the Zack post you link, and I don’t agree with it. As Thomas Kuhn argued, the fact that chemists and physicists disagree about whether helium is a molecule is no problem. Both communities have reasons to carve the joints differently. Different paradigms have valid reasons to draw lines differently.
I think your analysis of “you’re only X because of Y” is missing the “you are doing it wrong” implicit accusation in the statement. Basically, the implied meaning, I think, is that while there are acceptable reasons to X, you are lacking any of them, but instead your reason for X is Y, which is not one of the acceptable reasons. Which is why your Z is a defense—claiming to have reasons in the acceptable set. And another defense might be to respond entirely to the implied accusation and explain why Y should be an OK reason to X. “You’re only enjoying that movie scene because you know what happened before it”—“Yeah, and what’s wrong with that?”
Yes, this is the interpretation.
If I’m doing X wrong (in some way), it’s helpful for me to notice it. But then I notice I’m confused about when decoupling context is the “correct” thing to do, as exemplified in the post.
Rationalists tend to take great pride in decoupling and seeing through narratives (myself included), but I sense there might be some times when you “shouldn’t”, and they seem strangely caught up with embeddedness in a way.