A word is fundamentally a semantic stopsign when the whole purpose of the word is to cover up detail and allow you to communicate without having to address what’s underneath.
As I mentioned before, this isn’t always a problem. If someone says “I just realized I should pee before we leave”, and then goes and pees, then there really isn’t an issue there. We can still look more closely, if we want, but we aren’t going to find anything interesting that changes the moral of the story. They realized that if they don’t pee before leaving, they will end up with a full bladder and no convenient way to empty it. Does it mean they would otherwise have to waste time pulling over? That they won’t have a chance and will pee themselves? It doesn’t really matter, because it is sufficiently clear that it’s in everyone’s best interest to let the guy go pee before embarking on the road trip with him. The right answer is overconstrained here.
Similarly, “Bill should donate” can be unproblematic—if the details being glossed over don’t change the story. Sometimes they do.
If you say “Bill Gates should!” and then say “Well, what I really mean by that is… that I would like it, personally, because it would make the world better according to my values”, then that changes things drastically. “Should!” has a moral imperative that “I’d personally like...” simply does not—unless it somehow really matters what you’d like. Once you get rid of “should” you have to expose the driving force behind any imperative you want to make. Is it that you just want his money for yourself? Do you have a well-thought-out moral argument that Bill should find compelling? If so, what is it, and why should Bill find it compelling?
Very frequently, people will run out of justification before their point becomes compelling. I have a friend, for example, who thinks “Health care ‘should’ be ‘free’”, and who gets quite grumpy if you point out her lack of an actual argument. Fundamentally, what she means is “I want healthcare, and I don’t want to pay for it”, but saying it that way would make it way too obvious that she doesn’t actually have a compelling reason why anyone should want to pay for her healthcare—so she sticks with “it should be free”. This isn’t a political statement, btw, since I’m not saying that good arguments don’t exist, or that “health care ‘shouldn’t’ be ‘free’”. It’s just that she wants the world to be a certain way that would be convenient for her, and the way things currently are violates a “fairness” intuition she has, so she’s upset about it without really understanding what to do about it. She doesn’t see any reason that everyone else would find compelling, and so she moralizes in the hopes that the justification is either intuitively obvious to everyone else, or else that people will care that she feels that way.
And that’s an empirical prediction. If you say “You should do X” and “expect-2” at them to do it, and they do, then clearly your moralizing had sufficient force and you were right to think you could stop at that level of detailed support. If you start expecting at your dog to sit, and you’ve never taught it to sit, then there’s just no way to fill in the details behind “the dog should sit” which make any sense. “The world would be better if it sat”—sure, let’s grant that. What do you think you’re accomplishing by announcing this fact instead of teaching the dog to sit? Notice how that statement is oddly out of place? Notice how the “should” and “expect-2” kinda deflate once you recognize that the expectation will necessarily be falsified?
Returning to the “irrational fear” example, the statement is “I shouldn’t be afraid”. If you follow that up with a lack of fear, then fine. Otherwise, you have a contradiction. Get rid of the “should”, and see what happens. “I shouldn’t be afraid” → “I feel fear even though there is no danger”. Oh, there’s no danger? Now that you’re making this claim explicitly, how do you know? How come it looks like you think you’re going to splat your head on the concrete below? What does your brain anticipate happening, and why is it wrong? Have you checked to make sure the concrete anchors have been set properly? Are you sure you aren’t missing something that could lead to your head splatting on the concrete below?
When you can confidently answer “Yes, I have checked the knots, I have checked the anchors, and there is no way my head will splat on the concrete below. My brain was anticipating falling without any actual cause, and I can see now that there are no paths to that outcome”, then how do you maintain the fear? What are you afraid of, if that can’t happen? It’s like trying to say “I don’t believe it’s raining, but it is raining”. Even if you feel the same “fear” sensations, once you’re confident that they don’t mean anything it just becomes “I feel these sensations which don’t mean anything”. Okay, so what? If you’re sure they don’t mean anything then go climb. We call that “excitement”, btw.
When you “should” or “expect” at a thing, and your expectations are being falsified rather than validated, then that’s the cue to look deeper. It means that the stuff being buried underneath isn’t working out like you think it should, so you’re probably wrong about something. If you’re trying and failing to rock climb without fear, it probably means you were flinching away from actually addressing the dangers and that you need to check your knots, check whether you’ve done enough checking, and then once you do that you will find yourself climbing without being burdened by fear. If you’re trying to say that someone should do something you want them to do and they aren’t doing it, it probably means you have a gap in your model about why they would care or else how they would know—and once you figure that out you’ll find yourself happily explaining more, or creating a reason for them to care, or realizing that your emotions had you acting out of line—depending on the case at hand.
That sorta make sense? I know it’s a bit far from intuitive.
It makes sense as, like, a discussion of “this is sometimes what’s going on when people use the word should”. I’m far from convinced that that’s always what’s going on, or that it’s what’s going on in this particular situation.
Like, I feel like you’re taking something that should be at the level of “this is a hypothesis to keep in mind” and elevating it to “this is what’s true”.
(Oh hey, I used “should”. What do I mean by that? I guess kind of the same as if I said “if you add this list of numbers, you should get zero”. That is, I feel like you’re making a mistake to hold this thing at a level of confidence different from what I think is the correct level of confidence. Was there more to my “should” than that? Quite possibly, but… I feel like you’re going to have a confident prediction about what that was? And I want to point out that while it’s possible you know better than me what’s going on inside my head, it’s not the default guess.)
I guess I also want to point out that the sequence of events here is, in part:
Richard says a thing.
TAG and then yourself use the word “expect” to suggest Richard was being unreasonable.
gjm uses the word “should” in a reply to your “expect” to suggest Richard was perhaps being reasonable after all.
Big discussion about the words “expect” and “should”.
Notably, Richard never used either of those words himself. So for example, you say “When you “should” or “expect” at a thing, and your expectations are being falsified rather than validated, then that’s the cue to look deeper.” Well, is Richard “should”ing or “expect”ing? I could believe he’s doing the thing just with different words, but I don’t think it’s been established that he is, and any discussion of “this is what these words mean” is completely beside the point.
Not that I have anything against long beside-the-point digressions, but I do think it’s good for everyone to be aware that’s what they are.
(Gonna limit myself to two more replies after this, and depending on motivation I might not even do that many.)
Like, I feel like you’re taking something that should be at the level of “this is a hypothesis to keep in mind” and elevating it to “this is what’s true”.[...]
The hypothesis “The word ‘should’ is being used to allow communication while motivatedly covering up detail that is necessary to address” is simply one hypothesis to keep in mind, and doesn’t apply to every use of the word “should”. However “The word still functions to allow communication without having to get into further detail” is just something that is always true. What would a counterexample even look like?
I’ve tried to be explicit in the last two comments that this isn’t always a bad thing. Your use of the word “should” here seems pretty reasonable to me. That doesn’t mean that there isn’t more detail being hidden (mistake according to what values?), just that we more or less expect that the remaining ambiguity isn’t likely to be important so stopping at this level of precision is appropriate.
I feel like I’m kinda saying the same thing as last time though. Am I missing what your objection is? Do you see why “semantic stopsign” shouldn’t be seen as a boo light?
That is, I feel like you’re making a mistake to hold this thing at a level of confidence different from what I think is the correct level of confidence.
This does highlight a potential failure mode though. Determining which level of confidence is “correct” for someone else requires you to actively know something about what they’ve seen and what they’d be able to see. It’s pretty hard to justify until you can see them failing to see something.
I feel like you’re going to have a confident prediction about what that was? And I want to point out that while it’s possible you know better than me what’s going on inside my head, it’s not the default guess.)
Not in this case, no. Because it’s a fairly reasonable use, there’s no sign of failure, and therefore nothing to suggest what you might be doing without realizing you’re doing it.
If your answer to 54+38 is 92, I don’t have any way of knowing how you got there other than it worked. Maybe you used a calculator, or maybe you had a lucky guess. If you say 82, then I can make an educated guess that you used the method they teach in elementary school and forgot to carry the one. If you get 2052, I can guess pretty confidently that you used a calculator and hit the “x” button instead of the “+” button.
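To make that concrete, here’s a minimal sketch of that kind of diagnosis in Python. It’s purely illustrative: the error models below (a dropped carry, a wrong calculator button) are assumptions I’m naming for the example, not an exhaustive account of how people get sums wrong.

```python
# Minimal illustration of "recognizing failure modes from the answer alone",
# using the 54 + 38 example. The error models below (dropped carry, wrong
# operator) are illustrative assumptions, not an exhaustive diagnosis.

def add_forgetting_carries(a, b):
    """Column-wise addition that drops every carry (the grade-school slip)."""
    result, place = 0, 1
    while a or b:
        result += ((a % 10 + b % 10) % 10) * place  # keep the ones digit, lose the carry
        a, b, place = a // 10, b // 10, place * 10
    return result

def diagnose(answer, a=54, b=38):
    if answer == a + b:
        return "correct; no way to tell how you got there"
    if answer == add_forgetting_carries(a, b):
        return "likely column addition with a forgotten carry"
    if answer == a * b:
        return "likely hit 'x' instead of '+' on a calculator"
    return "no recognizable failure mode"

for guess in (92, 82, 2052):
    print(guess, "->", diagnose(guess))
# 92 -> correct; no way to tell how you got there
# 82 -> likely column addition with a forgotten carry
# 2052 -> likely hit 'x' instead of '+' on a calculator
```

The point is just that a wrong answer which matches a known failure mode tells you something a right answer never can.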
“Knowing what’s going on inside someone’s head better than they do” just means recognizing failure modes that they don’t recognize.
Well, is Richard “should”ing or “expect”ing? I could believe he’s doing the thing just with different words, but I don’t think it’s been established that he is, and any discussion of “this is what these words mean” is completely beside the point.
Richard is not on trial. He didn’t do anything anti-social that calls for such a trial. It would be presumptuous to speak for him, and inappropriately hostile to accuse him. I’m uncomfortable with the implication that this is what the “point” is, and perhaps should have disclaimed this explicitly early on. Heck, his main point isn’t even wrong or unreasonable.
It’s just that the fact that he felt it necessary to say it suggests something about his unspoken expectations—because Chapman’s/TAG’s expectations wouldn’t have led to that comment feeling relevant, even though it’s still fairly true. Productive conversation requires addressing the actual disagreement rather than talking past each other, and when those disagreements are buried in the underlying expectations, this means pointing the conversation there. That’s why TAG basically asked “What do you expect?”, and why it was the correct thing to do given the signs of expectation mismatches. Gjm responded to this by saying that “expect doesn’t always mean expect”, which is an extremely common misconception, and understanding why that is wrong is important—not just for the ability to have these kinds of discussions productively in general, but also for this one in particular.
However “The word still functions to allow communication without having to get into further detail” is just something that is always true. What would a counterexample even look like?
Hm. So I thought you were referring to a word like “cult” or “fascist”, where part of what’s going on is mixing normative and descriptive claims in a way that obfuscates. But now it seems you might have meant a word like “tree” or “lawyer” or “walk”; that is, that just about every word is a semantic stopsign?
And then when you say “stop allowing “should” to function as a semantic stop sign”, you mean “dig below the word “should” to its referent”, much as I might do to explain the words “tree” or “lawyer” or “walk” to someone unfamiliar with them?
But as gjm and I have both noted, he did that. Like, it sounds like that part of the conversation went: “you should do X” / “I did X” / “this isn’t about you, no one does X”.
I confess I do not understand this.
(It’s not that I thought you’d necessarily see my “should” above as bad. I just expected you’d think there was more going on with it than I thought was going on with it.)
mistake according to what values?
Not sure if this part is particularly relevant given my apparent misunderstanding of “semantic stopsign”. But to be clear, I meant a factual mistake.
“Knowing what’s going on inside someone’s head better than they do” just means recognizing failure modes that they don’t recognize.
Well, yes, and like I said it’s possible. And in the example you gave, I agree those would be good guesses.
But also, it seems common to me that someone will think they recognize a failure mode in someone else, one that that person doesn’t recognize; and that they’ll be wrong. Things like, “oh, the reason you support X is because you haven’t read Y”. Or “I would have liked that film if I hadn’t noticed the subtext; so when you say you liked that film, you must not have noticed the subtext; so when I point out the subtext you will stop liking the film”.
Something you said pattern matches very strongly to this, for me: “Heh, I understand that perspective. … I distinctly remember the conversation where I was insisting that...”.
It’s entirely possible that your framing there is priming me to read things into your words that you aren’t putting there yourself, and if I’m doing that then I apologize.
Gjm responded to this by saying that “expect doesn’t always mean expect”, which is an extremely common misconception
Well, honestly I’m still not convinced this is wrong. I had thought that getting into “should” might help clear this up, but it hasn’t, so.
It just doesn’t seem at all problematic to me that “I expect you to show up on time from today on” and “I expect the sun to rise tomorrow” are using two different senses of “expect”. It sounds like you think something like… if the speaker of the first one meditates on wanting, they will either anticipate that the listener will show up on time, or they will stop caring whether the listener shows up on time? I’m guessing that’s not a good description of what you think, but that’s what I’m getting.
Now to be clear, I wouldn’t describe this “expect” the same way gjm described “expect-2”. He made the distinction: “you expect-1 X when you think X will probably happen; you expect-2 X when you think that X should happen”. And I think I’d change expect-2 to something like “when you think that X should happen, and are low-key exercising some authority in service of causing X to happen”. Like “I expect you to show up on time” sounds to me like an order backed by corporate hierarchy, and “England expects every man will do his duty” sounds like an order backed by the authority of the crown. “I expect open source developers to be prompt at responding to bug reports” sounds like it’s exercising moral authority. And if we make this distinction, then it does not seem to me like Richard was expect-2ing anything of Chapman.
But that doesn’t seem particularly relevant, because:
understanding why that is wrong is important
So I agree this seems like the sort of thing that’s important-in-general to know about. If the word “expect” has only one common meaning, then I certainly want to know that; and if it has two, then I expect you want to know that.
But it still doesn’t seem like it matters in this specific case. This conversation stems from hypothesizing about what’s going on inside Richard’s head, and he didn’t use the word in question. So like,
“Richard is expecting ___” / “That seems like a fine thing for him to do, because “expect” can also mean ___” / “No it can’t, because...”
It seems like the obvious thing to do here, if we’re going to hypothesize about what’s going on in Richard’s head, is to just stop using the word “expect”? Going into “what does “expect” mean” seems like the opposite of productive disagreement.
For sake of brevity I’m going to respond just to the parts I see as more likely to be fruitful, but feel free to demand a response to anything I skip over and I’ll give one.
Hm. So I thought you were referring to a word like “cult” or “fascist”, where part of what’s going on is mixing normative and descriptive claims in a way that obfuscates. But now it seems you might have meant a word like “tree” or “lawyer” or “walk”; that is, that just about every word is a semantic stopsign?
Yes, closer to the latter. There’s always more underneath, even with words like “tree”.
However, the word “should” is a bit different, in ways we touch on below.
(It’s not that I thought you’d necessarily see my “should” above as bad. I just expected you’d think there was more going on with it than I thought was going on with it.)
Hm, okay.
And maybe there is. “Factual mistake” isn’t perfectly defined either. We could get further into the ambiguities there, but it’s all going to feel “yeah but that doesn’t matter” because it doesn’t. It’s defined well enough for our purposes here.
Well, yes, and like I said it’s possible. [...] But also, it seems common to me that someone will think they recognize a failure mode in someone else, one that that person doesn’t recognize; and that they’ll be wrong. Things like, “oh, the reason you support X is because you haven’t read Y”.
The important distinction to track here is whether the person is closing the loop or just saying whatever first comes to mind with no accountability. When the prediction fails, is there surprise and an update? Or do the goalposts keep moving and moving? The latter is obviously common and almost always leads to wrongness, but that isn’t a mark against the former, which actually works pretty well.
Something you said pattern matches very strongly to this, for me: “Heh, I understand that perspective. … I distinctly remember the conversation where I was insisting that...”.
I can see what you mean, but it’s pretty different. Fixed goal posts, consistent experience of observing minds changing when they reach them, known base rates, calibrating to the specifics of what is and isn’t said, etc. I can explain if you want.
It’s entirely possible that your framing there is priming me to read things into your words that you aren’t putting there yourself, and if I’m doing that then I apologize.
No worries. I can tell you’re working in entirely good faith here. I’m not confident that I can convey what I’d like to convey in the amount of effort you’re willing to put into this conversation, but if I can’t it’s definitely because I’ve failed to cross the inferential distance and not because your mind isn’t open.
Well, honestly I’m still not convinced this is wrong. I had thought that getting into “should” might help clear this up, but it hasn’t, so.
“Convinced” is a high bar, and we have spent relatively few words on the topic. Really grokking this stuff requires “doing” rather than just “talking about”. Meaning, actually playing one or both sides of “attempting to hold onto the frame where a failing ‘should’/‘expect-2’ is logically consistent and not indicative of wrongness” and “systematically tearing that frame apart by following the signs to the missing truth”. And then doing it over and over, over a wide range of things and into increasingly counterintuitive areas, until the idea that “Maybe this time it’ll be different!” stops feeling realistic and starts feeling like a joke. Working through each example usually takes an hour or two of back and forth until all the objections are defeated and the end result recognized as inevitable rather than merely “plausible”.
I’d count it a success if you walk away skeptical, but with a recognition that you can’t rule it out either, and a good enough sketch that you can start filling in the details.
It just doesn’t seem at all problematic to me that “I expect you to show up on time from today on” and “I expect the sun to rise tomorrow” are using two different senses of “expect”. It sounds like you think something like… if the speaker of the first one meditates on wanting, they will either anticipate that the listener will show up on time, or they will stop caring whether the listener shows up on time? I’m guessing that’s not a good description of what you think, but that’s what I’m getting.
Yes! Not quite, but close!
So yes, those two are different. And yes, “low-key exercising authority” is a key distinction to make here. However, it’s not the case that expecting your employee to show up on time is simultaneously “low-key exercising authority” AND “not a prediction”. It’s either still a prediction, or it’s not exercising authority. The mechanism of exercising authority is through predicting people will do as you direct them to, and if you lose that then you don’t actually have authority and are simply engaging in make-believe.
This is a weird concept, but “intentions” and “expectations” are kinda the same thing, related to differently. This is why your mom could tell you “You are going to start behaving right now!” and you don’t get confused about why she’s giving an order as if it’s a prediction. It’s why your coach in high school would say “You have to believe you can win!”, and why some kids really did choke under pressure and under-perform relative to what they were otherwise capable of. When it comes to predicting whether you’ll get a glass of water when you’re thirsty, you can trivially realize either prediction, so you solve this ambiguity by choosing to predict you’re going to get what you want and act so as to create that reality. If you want to start levitating a large object with your mind, you can’t imagine that working, so it gets really hard to even intend to do it. That’s the whole “use the try harder, Luke” stuff. When it gets hard to expect success, it gets hard to even try. (Scott’s writing on “predictive processing” touches on this equivalence.)
If you’ve been trying to low-key authority at someone to show up on time, and then you start looking real closely at what you’re doing, one potential outcome is that you simply anticipate they’ll show up, yes. In this case, it’s like… think of the difference between “I expect myself to get up and run a mile today” when you really don’t wanna and you can feel the tension that exercising authority is creating and you’re not entirely sure it’ll keep working… and then compare that to what it feels like when “run a mile” is just what you do, like getting a glass of water when you’re thirsty, or brushing your teeth in the morning (hopefully). It may still suck, and you may not *like* running, but you notice your feet start walking out the door almost “on their own” because “not running” isn’t actually a thing anymore. Any tension there is the uncertainty you’re trying to deny in order to insist reality bend to your will, and when you look closely and find out that it’s definitely gonna happen, it goes away because you’re not uncertain anymore.
In the other extreme when you find that it’s definitely not gonna happen, you “stop caring” in the sense that you no longer get bothered by it, but not in the sense that you’d no longer appreciate the guy showing up on time, and not in the sense that you stop exerting optimization pressure in that direction. It actually frees you up to optimize a lot more in that direction, because you’re no longer navigating by a bad map, you no longer come off as passive aggressive or aggressive and lacking in empathy, and you’re not bound to expecting success before you do something. So for example, that recent case I referenced involved me feeling annoyed by an acquaintance’s condescending douchery. Once I looked at where I was going wrong (why he was the way he was, why my annoyance had no authority over him, etc), I no longer “cared” in the sense that his behavior didn’t annoy me anymore. But also, that lack of annoyance opened up room for me to challenge and tease him without it being perceived as (or being) an ego threat, and now I actually like the guy and rather than condescending to me he regularly asks for my input on things (even though his personality limitations are still there).
In the middle, you realize you don’t actually know whether or not they’re going to start showing up on time. Instead of asserting “I expect!” and hoping for the best, you realize that you can’t “decide” what they do, but they can, so you ask: “Do you think you’re going to start coming in on time?” And you wait for an answer. And you look to see what this means they will actually do. This feels very different from the other side. Instead of feeling like you’re being projected at, it feels like you’re being seen. You can’t just “say” you will, because your boss is no longer looking to see if you’ll prop up his semi-delusional fantasy a little longer; he’s looking to see what you will do. Instead of being pushed into a role where you grumble “Yes sir…” because you have no choice and having things happen to you that are out of your control, the weight of the decision is on your shoulders, and you feel it. Are you going to start showing up on time?
I’m not going to demand anything, especially when I don’t plan to reply again after this. (Or… okay, I said I’d limit myself to two more replies but I’m going to experiment with allowing myself short non-effortful ones if I’m able to make them. Like, if you want to ask questions that have simple answers I’m not going to rule out answering them. But I am still going to commit to not putting significant effort into further replies.)
But the thing that brought me into this conversation was the semantic stop sign thing. It still seems to me like that part of the conversation went “you should do X” / “I did X” / “this isn’t about you, no one does X”. And based on my current understanding of what you meant by “semantic stopsign”, I agree that gjm didn’t do X, and it feels like you’ve ignored both him and myself trying to point this out.
I expect there’s a charitable explanation for this, but I honestly don’t have one in mind.
I can see what you mean, but it’s pretty different. Fixed goal posts, consistent experience of observing minds changing when they reach them, known base rates, calibrating to the specifics of what is and isn’t said, etc. I can explain if you want.
Mm, I think I know what you mean, but… I don’t think I trust that you’re at that level?
So, okay, I have to remember here that this thread originally came up when I felt like you’d think you knew better than me what was in my own head, and then fair play to you, you didn’t. But then you defended the possibility of doing that, without having done it in that specific case. So this bit has to be caveated with a “to the extent that you actually did the thing”, which I kind of think you did a bit with gjm and Richard but I’m not super sure right now.
As I said, I agree it’s possible to know what’s going on in someone else’s mind better than they do. I agree that the things you say here make it more likely than otherwise.
But at best they’re epistemically illegible; you can’t share the evidence that makes you confident here, in a way that someone else can verify it. And it’s worse than that, because they’re the sort of thing I feel like people often self-delude about. Which is not to say you’re doing that; only that I don’t think I can or should rule out the possibility.
So these situations may seem very different to you, and you may be right. But as a reader, they look very similar, and I think I’m justified in reacting to them similarly.
There are ways to make me more disposed to believe you in situations like this, which I think roughly boil down to “make it clear that you have seen the skulls”. I’ve written another recent comment on that subject, though only parts of it are relevant here.
No good thing to quote here, but re expecting: I feel like you’re saying “these are the same” and then describing them being very different.
So sure, I expect-2 my employee to show up on time, and then I do this mental shift. Then either I expect-1 him to show up on time; or I realize I don’t expect-1 him to show up on time, and then I can deal with that.
And maybe this is a great mental shift to make. Actually I’d say I’m pretty bullish on it; this feels to me more like “oh, you mean that thing, yeah I like that thing” than like “oh, that’s a thing? Huh” or “what on earth does that mean?”
So I don’t think the point of friction here is about whether or not I understand the mental shift. I think the point of friction is, I have no idea why you described it as “there’s only one kind of expect”.
Like… even assuming I’ve made this mental shift, that’s not the same as just expect-1ing him to show up on time? This feels like telling me that “a coin showing heads” is the same as “a coin that I’ve just flipped but not yet looked at”, because once I look I’ll either have the first thing or I’ll be able to deal with not having it. Or that “a detailed rigorous proof” is the same as “a sketch proof that just needs filling out”, because once I fill out the details I’ll either have the first thing or I’ll be able to deal with the fact that my proof was mistaken.
And that’s from the perspective of the boss. Suppose I’m the employee and my boss says that to me. I can’t make the mental shift for her. It probably wouldn’t go down very well to ask “ah, but do you predict that I’ll show up on time? Because if you don’t, then you should come to terms with that and work with me to...”
Maybe if my boss did make this mental shift, then that would be good for me too. But given that she hasn’t, I kind of need to know: when she used the word “expect” there, was that expect-1 or expect-2? Telling me about a mental shift she could make in the way she relates to expectations seems unhelpful. Telling me that the two kinds of expectations are the same seems worse than useless.
I’m not going to demand anything, especially when I don’t plan to reply again after this.
“Demand” is just a playful way of saying it. Feel free to state that you think what I skipped over is important as well. Or not.
But the thing that brought me into this conversation was the semantic stop sign thing. It still seems to me like that part of the conversation went “you should do X” / “I did X” / “this isn’t about you, no one does X”. And based on my current understanding of what you meant by “semantic stopsign”, I agree that gjm didn’t do X, and it feels like you’ve ignored both him and myself trying to point this out.
I’m confused. I assume you meant to say that you agree with gjm that he *did* do X, and not that you agree with me that he didn’t?
Anyway, “You should do X”/“I did X”/“No one does X” isn’t an accurate summary. To start with, I didn’t say he *should* do anything, because I don’t think that’s true in any sort of unqualified way—and this is important because a description of the effects of a type of action is not an accusation, while the presupposition that he isn’t doing something he should be doing kinda is. Secondly, the thing I described the benefits of, which he accused me of accusing him of not doing, is not a thing I said “no one does”. Plenty of people do that on plenty of occasions. Everyone *also* declines to do it in other cases, and that is not a contradiction.
The actual line I said is this:
The frame that “I know that X will happen, and I’m just saying it shouldn’t” falls apart when you look at it closely and stop allowing “should” to function as a semantic stop sign
Did he “look closely” and “stop allowing ‘should’ to function as a semantic stop sign”? Here’s his line:
you expect-2 X when you think that X should happen (more precisely, that some person/group/institution should make it happen; more precisely, that the world will be a better place according to your values or theirs if they do).
He did take the first step. You could call it two, if you want to count “this specific person is the one who should make it happen” as a separate step, but it’s not a sequential step and not really relevant. “This should happen” → “the world would be better if it did” is the only bit involving the ‘should’, and that’s a single step.
Does that count as “looking closely”? I don’t see how it can. “Looking at all”, sure, but I didn’t say “Even the most cursory look possible will reveal..”. You have to look *closely*. AND you have to “stop allowing ‘should’ to function as a semantic stopsign”.
He did think “What do I mean by that?”, and gave a first level answer to the question. But he didn’t “stop using should as a stop sign”. He still used “should”, and “should” is still a stop sign. When you say “By ‘should’, I mean ____”, what you’re doing is describing the location of the stop sign. He may have moved it back a few yards, but it’s still there, as evidenced by the fact that he used “should” and then attributed meaning to it. When you stop using should as a stopsign, there’s no more should. As in “I don’t think Chapman ‘should’ do anything. The concept is incoherent”.
It’s like being told “This thing you’re in is an airplane. If you open the throttle wide, and you resist the temptation to close it, you will pick up speed and take off”, and then thinking you’ve falsified that because you opened the throttle for three seconds and the plane didn’t take off.
I expect there’s a charitable explanation for this, but I honestly don’t have one in mind.
In general it’s better to avoid talking about specific things people have done which can be interpreted as “wrong” unless you have an active reason to believe that focus will actually stay on “is it true?” rather than “who loses status if it’s true”—or unless the thing is actually “wrong” in the sense that the behavior needs to be sanctioned. It’s not that things can’t get dragged there anyway if you’re talking about the abstract principles themselves, but at least there’s a better chance of focus staying on the principles where it should be.
I was kinda hoping that by saying “Takeoff distance is generally over a quarter mile, and many runways are miles long”, you’d recognize why the plane didn’t take off without needing to address it specifically.
So, okay, I have to remember here that this thread originally came up when I felt like you’d think you knew better than me what was in my own head, and then fair play to you, you didn’t. But then you defended the possibility of doing that, without having done it in that specific case. So this bit has to be caveated with a “to the extent that you actually did the thing”, which I kind of think you did a bit with gjm and Richard but I’m not super sure right now.
Well, I was pretty careful to not comment on what Richard and gjm were doing. I didn’t accuse gjm of anything, nor did I accuse Richard of anything. I see what TAG saw. I also saw gjm respond to my “self-predictably false expectation is a failure of rationality” in the way someone would respond if they thought the only reason to believe that is a lack of awareness of the perspective on which “there’s two senses of the word ‘expect’” is a solution—and in a way that I can’t imagine anyone responding if they were aware of the very good reasons that can coexist with that awareness.
I think those pieces of evidence are significant enough that dismissing them as meaningless is a mistake, so I defended TAG’s decision to highlight a potential problem and I chose to highlight another myself. Does it mean that they *were* doing the things that this interpretation of the evidence points towards? Not necessarily. I also didn’t assert anything of the sort. It’s up to the individual to figure out how likely they think that is.
If, despite not asserting these things, you think you know enough about what’s going on in my mind that you can tell both my confidence level and how my reasoning doesn’t justify it, then by all means lemme know :P
But at best they’re epistemically illegible; you can’t share the evidence that makes you confident here, in a way that someone else can verify it.
I mean, not *trivially*, yeah. Such is life.
And it’s worse than that, because they’re the sort of thing I feel like people often self-delude about. Which is not to say you’re doing that; only that I don’t think I can or should rule out the possibility.
For sure, it’s definitely a thing that can happen and you shouldn’t rule it out unless you can tell that it’s not that—and if you say you can’t tell it’s not that, I definitely believe you. However, “it’s just self delusion” does make testable predictions.
So for example, say I claim to be able to predict the winning lottery numbers but it’s really just willful delusion. If you say “Oh that’s amazing! What are tomorrow’s numbers?”, then I’m immediately put to the choice of 1) sticking my neck out, lying, and putting a definite expiration date on having any of my BS taken seriously, 2) changing my story in “unlikely” ways that show me to be dodging this specific prediction without admitting to a general lack of predicting power (“Oh, it doesn’t work on March 10ths. Total coincidence, I know. Every other day though…”), or 3) clarifying that my claims are less bold than that (“I said I can predict *better than chance*, but it’s still only a ~0.1% success rate”), and getting out of having my claims deflated by deflating them myself.
By iterating these things, you can pretty quickly drive a wedge in that separates sincere people from the delusional—though clever sociopathic liars will be bucketed with the sincere until those expiration dates start arriving. It takes on the order of n days to bound their power to predicting at most 1/n, but delusion can be detected as fast as anticipations can be elicited.
But as a reader, they look very similar, and I think I’m justified in reacting to them similarly.
Well, you’re justified in being skeptical, for sure. But there’s an important difference between “Could be just self delusion, I dunno..” and “*Is* just self delusion”—and I think you’d agree that the correct response is different when you haven’t yet been able to rule out the possibility that it’s legit.
There are ways to make me more disposed to believe you in situations like this, which I think roughly boil down to “make it clear that you have seen the skulls”.
For sure, there are skulls everywhere. The traps get really subtle and insidious, and getting comfortable and declaring oneself “safe” isn’t a thing you ever get to do. However, it sounds like the traps you’re talking about are the ones along the lines of failing to even check whether you anticipate it being true before saying “Pshh, you’re just saying that because you haven’t read Guns, Germs, and Steel. Trust me bro, read it and you’ll believe me”. And those just aren’t the traps that are gonna get ya if you’re trying at all.
My point though was that there are successes everywhere too. “Seeing someone’s mind do a thing that they themselves do not see” is very very common human behavior, even though it’s not foolproof. In fact, a *really good* way to find out what your own mind is doing is to look at how other people respond to you, and to try to figure out what it is they’re seeing. That’s how you find things that don’t fit your narrative.
I’ve written another recent comment on that subject, though only parts of it are relevant here.
I get your distaste for that kind of comment, and I agree that there’s ways Val could have put in more effort to make it easier to accept. At the same time, recoiling from such things is a warning sign, and “nuggets of wisdom from above” is the last thing you want to tax.
I still remember something Val said to me years ago that had a similar vibe. In the end, I don’t think he was right, but I do think he was picking up on something and I’m glad he was willing to share the hypothesis. Certainly some other nuggets have been worth the negligible cost of listening to them.
So I don’t think the point of friction here is about whether or not I understand the mental shift. I think the point of friction is, I have no idea why you described it as “there’s only one kind of expect”.
Because there’s only one kind of expect. There’s “expecting”, and there’s “failing to expect, while pretending to be expecting and definitely not failing”. These are two distinct things, yes. Yet only the former is actually expecting.
It can seem like “I expect-2, then I introspect and things change, and I come out of it with expect-1”. As if “expect-2” is a tool that is distinct from expect-1 and sometimes the better tool for the job, but in this case you set the former down and picked up the latter. As if in *this case* you looked closer and thought “Oh wow, I guess I was mistaken! That’s a torx bolt not an allen bolt!”.
There’s *another* mental shift though, on the meta level, which starts to happen after you do this enough.
So you keep reaching for “expect-2”, and it kinda sorta works from time to time, but *every time* you look closer, you think “Ah, this is another one of those cases where an expect-2 isn’t the right tool!”. And so eventually you start to notice that it’s curiously consistent, but you think “Well, seeing a bunch of white swans doesn’t disprove the existence of black swans! I just haven’t found the right job for this tool yet!”—or rather “All the right jobs are coincidentally the ones I haven’t examined in much detail! Because they’re so obvious!”.
Eventually you start to notice that there’s a pattern to it. It’s not just “This context is completely different, the considerations that determine which tool to use are completely different, and what a coincidence! The answer still points the same way!”. It’s “Oh, I followed the same systematic path, and ended up with the same realization. I wonder if maybe there’s something fundamental going on here?”. Eventually you get to the point where you start to look at the path itself, and recognize that what you’re doing is exposing delusion, and the things which tell you what step to take next are indicators of delusion which you’ve been following. Eventually you notice that the whole “unique flavor” that *defined* “expect-2” is actually the flavor of delusion which you’ve been seeking out and exposing. And that the active ingredient in there, which made it kinda work when it did, has been expect-1 this whole damn time. It’s not “a totally different medicine”. It’s the same medicine mixed with horseshit.
At some point it becomes a semantic debate because you can define a sequence of characters to mean anything—if you don’t care about it being useful or referring to the same thing others use it to refer to. You could define “expect-2” as “expect-1, mixed with horseshit, and seen by the person doing it as a valid and distinct thing which is not at all expect-1 mixed with horseshit”, but it won’t be the same thing others refer to when they say “expect-2”—because they’ll be referring to a valid and distinct thing which is not at all expect-1 mixed with horseshit (even though no such thing exists), and when asked to point at “expect-2” they will point at a thing which is in fact a combination of expect-1 and horseshit.
Like… even assuming I’ve made this mental shift, that’s not the same as just expect-1ing him to show up on time? This feels like telling me that “a coin showing heads” is the same as “a coin that I’ve just flipped but not yet looked at”, because once I look I’ll either have the first thing or I’ll be able to deal with not having it.
Expectations will shift. To start with you have a fairly even allocation of expectation, and this allocation will shift to something much more lopsided depending on the evidence you see. However, it was never actually in a state of “Should be heads, dammit”. That wasn’t a “different kind of expectation, which can be wrong-1 without being wrong-2, and was 100% allocated to heads”. Your expect-1 was split 50/50 between heads and tails, and you were swearing up and down that tails wasn’t a legitimate possibility because you didn’t want it to be. That is all there is, and all there ever was.
And that’s from the perspective of the boss. Suppose I’m the employee and my boss says that to me. I can’t make the mental shift for her. It probably wouldn’t go down very well to ask “ah, but do you predict that I’ll show up on time? Because if you don’t, then you should come to terms with that and work with me to...”
Maybe if my boss did make this mental shift, then that would be good for me too. But given that she hasn’t, I kind of need to know: when she used the word “expect” there, was that expect-1 or expect-2? Telling me about a mental shift she could make in the way she relates to expectations seems unhelpful. Telling me that the two kinds of expectations are the same seems worse than useless.
Ah, but look at what you’re doing! You’re talking about telling your boss what she “should” do! You’re talking about looking away from the fact that you know damn well what she means so that you can prop up this false expectation that your boss will “come to terms with that”! *Of course* that’s not going to work!
You want to go in the opposite direction. You want to understand *exactly* what she means: “I’m having trouble expecting you to do what I want. I’m a little bothered by that. Rather than admit this, I am going to try to take it out on you if you don’t make my life easier by validating my expectations”. You want to not get hung up at the stage of “Ugh, I don’t want to have to deal with that”/”She shouldn’t do that, and I should tell her so!”, and instead do the work of updating your own maps until you no longer harbor known-false expectations and attach desires to possibilities which aren’t real.
When you’ve done that, you won’t think to say “You should come to terms with that” to your boss, even if everyone would be better off if she did, because doing so will sound obviously stupid instead of sounding like something that “should” work. What you choose to say still depends on what you end up seeing but whatever it is will feel *different* -- and quite different on the other side too.
Imagine you’re the boss putting on your serious face and telling an employee that you expect them to show up on time from now on. It’s certainly aggravating if they say “Ah, but do you mean that? You should work on that!”. But what if you put your serious face on, you say to them “Bob, I noticed that you’ve been late a couple times recently, and I expect you to be on time from now on”, and in response, Bob gives you a nice big warm smile and exclaims “I like your optimism!”.
It still calls out the same wishful thinking on the boss’s part, but in a much more playful way that isn’t flinching from anything. Sufficiently shitty bosses can hissy fit about anything, but if you imagine how *you* would respond as a boss, I think you’d have a hard time not admitting to yourself “Okay, that’s actually kinda funny. He got me”, even if you try to hide it from the employee. I expect that you’d have a real hard time being mad if the employee followed up “I like your optimism!” with a sincere “I expect I will too.” And I bet you’ll be a little more likely to pivot from “I expect!” towards something more like “It’s important that we’re on time here, can I trust that you won’t let me down?”.
A word is fundamentally a semantic stopsign when the whole purpose of the word is to cover up detail and allow you to communicate without having to address what’s underneath.
As I mentioned before, this isn’t always a problem. If someone says “I just realized I should pee before we leave”, and then goes and pees, then there really isn’t an issue there. We can still look more closely, if we want, but we aren’t going to find anything interesting that changes the moral of the story. They realized that if they don’t pee before leaving, they will end up with a full bladder and no convenient way to empty it. Does it mean they would otherwise have to waste time pulling over? That they won’t have a chance and will pee themselves? It doesn’t really matter, because it is sufficiently clear that it’s in everyone’s best interest to let the guy go pee before embarking on the road trip with him. The right answer is overconstrained here.
Similarly, “Bill should donate” can be unproblematic—if the details being glossed over don’t change the story. Sometimes they do.
If you say “Bill gates should!” and then say “Well, what I really mean by that is… that I would like it, personally, because it would make the world better according to my values”, then that changes things drastically. “Should!” has a moral imperative that “I’d personally like...” simply does not—unless it somehow really matters what you’d like. Once you get rid of “should” you have to expose the driving force behind any imperative you want to make. Is it that you just want his money for yourself? Do you have a well thought out moral argument that Bill should find compelling? If so, what is it, and why should Bill find it compelling?
Very frequently, people will run out of justification before their point becomes compelling. I have a friend, for example, who thinks “Health care ‘should’ be ‘free’”, and who gets quite grumpy if you point out her lack of an actual argument. Fundamentally, what she means is “I want healthcare, and I don’t want to pay for it”, but saying it that way would make it way too obvious that she doesn’t actually have a compelling reason why anyone should want to pay for her healthcare—so she sticks with “it should be free”. This isn’t a political statement, btw, since I’m not saying that good arguments don’t exist, or that “health care ‘shouldn’t’ be ‘free’”. It’s just that she wants the world to be a certain way that would be convenient to her, and the way things currently are violate a “fairness” intuition she has, so she’s upset about it without really understanding what to do about it. She doesn’t see any reason that everyone else would feel compelling, and so she moralizes in the hopes that the justification is either intuitively obvious to everyone else, or else that people will care that she feels that way.
And that’s an empirical prediction. If you say “You should do X” and “expect-2“ at them to do it, and they do, then clearly your moralizing had sufficient force and you were right to think you could stop at that level of detailed support. If you start expecting at your dog to sit, and you’ve never taught it to sit, then there’s just no way to fill in the details behind “the dog should sit” which make any sense. “The world would be better if it sat”—sure, let’s grant that. What do you think you’re accomplishing by announcing this fact instead of teaching the dog to sit? Notice how that statement is oddly out of place? Notice how the “should” and “expectation-2” kinda deflate once you recognize that the expectation will necessarily be falsified?
Returning to the “irrational fear” example, the statement is “I shouldn’t be afraid”. If you follow that up with a lack of fear, then fine. Otherwise, you have a contradiction. Get rid of the “should”, and see what happens. “I shouldn’t be afraid” → “I feel fear even though there is no danger”. Oh, there’s no danger? Now that you’re making this claim explicitly, how do you know? How come it looks like you think you’re going to splat your head on the concrete below? What does your brain anticipate happening, and why is it wrong? Have you checked to make sure the concrete anchors have been set properly? Are you sure you aren’t missing something that could lead to your head splatting on the concrete below?
When you can confidently answer “Yes, I have checked the knots, I have checked the anchors, and there is no way my head will splat on the concrete below. My brain was anticipating falling without any actual cause, and I can see now that there are no paths to that outcome”, then how do you maintain the fear? What are you afraid of, if that can’t happen? It’s like trying to say “I don’t believe it’s raining, but it is raining”. Even if you feel the same “fear” sensations, once you’re confident that they don’t mean anything it just becomes “I feel these sensations which don’t mean anything”. Okay, so what? If you’re sure they don’t mean anything then go climb. We call that “excitement”, btw.
When you “should” or “expect” at a thing, and your expectations are being falsified rather than validated, then that’s the cue to look deeper. It means that the stuff being buried underneath isn’t working out like you think it should, so you’re probably wrong about something. If you’re trying and failing to rock climb without fear, it probably means you were flinching away from actually addressing the dangers and that you need to check your knots, check whether you’ve done enough checking, and then once you do that you will find yourself climbing without being burdened by fear. If you’re trying to say that someone should do something you want them to do and they aren’t doing it, it probably means you have a gap in your model about why they would care or else how they would know—and once you figure that out you’ll find yourself happily explaining more, or creating a reason for them to care, or realizing that your emotions had you acting out of line—depending on the case at hand.
That sorta make sense? I know it’s a bit far from intuitive
It makes sense as, like, a discussion of “this is sometimes what’s going on when people use the word should”. I’m far from convinced that that’s always what’s going on, or that it’s what’s going on in this particular situation.
Like, I feel like you’re taking something that should be at the level of “this is a hypothesis to keep in mind” and elevating it to “this is what’s true”.
(Oh hey, I used “should”. What do I mean by that? I guess kind of the same as if I said “if you add this list of numbers, you should get zero”. That is, I feel like you’re making a mistake to hold this thing at a level of confidence different from what I think is the correct level of confidence. Was there more to my “should” than that? Quite possibly, but… I feel like you’re going to have a confident prediction about what that was? And I want to point out that while it’s possible you know better than me what’s going on inside my head, it’s not the default guess.)
I guess I also want to point out that the sequence of events here is, in part:
Richard says a thing.
TAG and then yourself use the word “expect” to suggest Richard was being unreasonable.
gjm uses the word “should” in a reply to your “expect” to suggest Richard was perhaps being reasonable after all.
Big discussion about the words “expect” and “should”.
Notably, Richard never used either of those words himself. So for example, you say “When you “should” or “expect” at a thing, and your expectations are being falsified rather than validated, then that’s the cue to look deeper.” Well, is Richard “should”ing or “expect”ing? I could believe he’s doing the thing just with different words, but I don’t think it’s been established that he is, and any discussion of “this is what these words mean” is completely besides the point.
Not that I have anything against long besides-the-point digressions, but I do think it’s good for everyone to be aware that’s what they are.
(Gonna limit myself to two more replies after this, and depending on motivation I might not even do that many.)
The hypothesis “The word ‘should’ is being used to allow communication while motivatedly covering up detail that is necessary to address” is simply one hypothesis to keep in mind, and doesn’t apply to every use of the word “should”. However “The word still functions to allow communication without having to get into further detail” is just something that is always true. What would a counterexample even look like?
I’ve tried to be explicit in the last two comments that this isn’t always a bad thing. Your use of the word “should” here seems pretty reasonable to me. That doesn’t mean that there isn’t more detail being hidden (mistake according to what values?), just that we more or less expect that the remaining ambiguity isn’t likely to be important so stopping at this level of precision is appropriate.
I feel like I’m kinda saying the same thing as last time though. Am I missing what your objection is? Do you see why “semantic stopsign” shouldn’t be seen as a boo light?
This does highlight a potential failure mode though. Determining which level of confidence is “correct” for someone else requires you to actively know something about what they’ve seen and what they’d be able to see. It’s pretty hard to justify until you can see them failing to see something.
Not in this case, no. Because it’s a fairly reasonable use there’s no sign of failure, and therefore nothing to suggest what you might be doing without realizing you’re doing it.
If your answer to 54+38 is 92, I don’t have any way of knowing how you got there other than it worked. Maybe you used a calculator, or maybe you had a lucky guess. If you say 82, then I can make an educated guess that you used the method they teach in elementary school and forgot to carry the one. If you get 2052, I can guess pretty confidently that you used a calculator and hit the “x” button instead of the “+” button.
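If it helps, here’s a minimal sketch of that diagnosis logic in code. It’s purely illustrative (the numbers and failure modes are just the ones from the example above), not a claim that this is how anyone actually does it:

```python
# Purely illustrative sketch: given someone's answer to 54 + 38, guess which
# failure mode (if any) most plausibly produced it. The point is that only
# *wrong* answers carry a signature of how they were generated.
def diagnose(answer, a=54, b=38):
    if answer == a + b:        # 92: correct, so the method is unrecoverable
        return "correct; no way to tell how they got there"
    if answer == a + b - 10:   # 82: dropped the carried ten
        return "probably forgot to carry the one"
    if answer == a * b:        # 2052: hit 'x' instead of '+'
        return "probably a calculator, with 'x' pressed instead of '+'"
    return "some other mistake; not enough signal to guess the mechanism"

for guess in (92, 82, 2052, 77):
    print(guess, "->", diagnose(guess))
```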
“Knowing what’s going on inside someone’s head better than they do” just means recognizing failure modes that they don’t recognize.
Richard is not on trial. He didn’t do anything anti-social that calls for such a trial. It would be presumptuous to speak for him, and inappropriately hostile to accuse him. I’m uncomfortable with the implication that this is what the “point” is, and perhaps should have disclaimed this explicitly early on. Heck, his main point isn’t even wrong or unreasonable.
It’s just that the fact that he felt it to be necessary to say suggests something about his unspoken expectations—because Chapman’s/TAG’s expectations wouldn’t have led to that comment feeling relevant even though it’s still fairly true. Productive conversation requires addressing the actual disagreement rather than talking past each other, and when those disagreements are buried in the underlying expectations, this means pointing the conversation there. That’s why TAG basically asked “What do you expect?”, and why it was the correct thing to do given the signs of expectation mismatches. Gjm responded to this by saying that “expect doesn’t always mean expect”, which is an extremely common misconception, and understanding why that is wrong is important—not just for the ability to have these kinds of discussions productively in general, but for this one in particular.
Hm. So I thought you were referring to a word like “cult” or “fascist”, where part of what’s going on is mixing normative and descriptive claims in a way that obfuscates. But now it seems you might have meant a word like “tree” or “lawyer” or “walk”; that is, that just about every word is a semantic stopsign?
And then when you say “stop allowing “should” to function as a semantic stop sign”, you mean “dig below the word “should” to its referent”, much as I might do to explain the words “tree” or “lawyer” or “walk” to someone unfamiliar with them?
But as gjm and I have both noted, he did that. Like, it sounds like that part of the conversation went: “you should do X” / “I did X” / “this isn’t about you, no one does X”.
I confess I do not understand this.
(It’s not that I thought you’d necessarily see my “should” above as bad. I just expected you’d think there was more going on with it than I thought was going on with it.)
Not sure if this part is particularly relevant given my apparent misunderstanding of “semantic stopsign”. But to be clear, I meant a factual mistake.
Well, yes, and like I said it’s possible. And in the example you gave, I agree those would be good guesses.
But also, it seems common to me that someone will think they recognize a failure mode in someone else, that that person doesn’t recognize; and that they’ll be wrong. Things like, “oh, the reason you support X is because you haven’t read Y”. Or “I would have liked that film if I hadn’t noticed the subtext; so when you say you liked that film, you must not have noticed the subtext; so when I point out the subtext you will stop liking the film”.
Something you said pattern matches very strongly to this, for me: “Heh, I understand that perspective. … I distinctly remember the conversation where I was insisting that...”.
It’s entirely possible that your framing there is priming me to read things into your words that you aren’t putting there yourself, and if I’m doing that then I apologize.
Well, honestly I’m still not convinced this is wrong. I had thought that getting into “should” might help clear this up, but it hasn’t, so.
It just doesn’t seem at all problematic to me that “I expect you to show up on time from today on” and “I expect the sun to rise tomorrow” are using two different senses of “expect”. It sounds like you think something like… if the speaker of the first one meditates on wanting, they will either anticipate that the listener will show up on time, or they will stop caring whether the listener shows up on time? I’m guessing that’s not a good description of what you think, but that’s what I’m getting.
Now to be clear, I wouldn’t describe this “expect” the same way gjm described “expect-2”. He made the distinction: “you expect-1 X when you think X will probably happen; you expect-2 X when you think that X should happen”. And I think I’d change expect-2 to something like “when you think that X should happen, and are low-key exercising some authority in service of causing X to happen”. Like “I expect you to show up on time” sounds to me like an order backed by corporate hierarchy, and “England expects every man will do his duty” sounds like an order backed by the authority of the crown. “I expect open source developers to be prompt at responding to bug reports” sounds like it’s exercising moral authority. And if we make this distinction, then it does not seem to me like Richard was expect-2ing anything of Chapman.
But that doesn’t seem particularly relevant, because:
So I agree this seems like the sort of thing that’s important-in-general to know about. If the word “expect” has only one common meaning, then I certainly want to know that; and if it has two, then I expect you want to know that.
But it still doesn’t seem like it matters in this specific case. This conversation stems from hypothesizing about what’s going on inside Richard’s head, and he didn’t use the word in question. So like,
“Richard is expecting ___” / “That seems like a fine thing for him to do, because “expect” can also mean ___” / “No it can’t, because...”
It seems like the obvious thing to do here, if we’re going to hypothesize about what’s going on in Richard’s head, is to just stop using the word “expect”? Going into “what does “expect” mean” seems like the opposite of productive disagreement.
For sake of brevity I’m going to respond just to the parts I see as more likely to be fruitful, but feel free to demand a response to anything I skip over and I’ll give one.
Yes, closer to the latter. There’s always more underneath, even with words like “tree”.
However, the word “should” is a bit different, in ways we touch on below.
Hm, okay.
And maybe there is. “Factual mistake” isn’t perfectly defined either. We could get further into the ambiguities there, but it’s all going to feel “yeah but that doesn’t matter” because it doesn’t. It’s defined well enough for our purposes here.
The important distinction to track here is whether the person is closing the loop or just saying whatever first comes to mind with no accountability. When the prediction fails, is there surprise and an update? Or do the goalposts keep moving and moving? The latter is obviously common and almost always leads to wrongness, but that isn’t a mark on the former which actually works pretty well.
I can see what you mean, but it’s pretty different. Fixed goal posts, consistent experience of observing minds changing when they reach them, known base rates, calibrating to the specifics of what is and isn’t said, etc. I can explain if you want.
No worries. I can tell you’re working in entirely good faith here. I’m not confident that I can convey what I’d like to convey in the amount of effort you’re willing to put into this conversation, but if I can’t it’s definitely because I’ve failed to cross the inferential distance and not because your mind isn’t open.
“Convinced” is a high bar, and we have spent relatively few words on the topic. Really grokking this stuff requires “doing” rather than just “talking about”. Meaning, actually playing one or both sides of “attempting to hold onto the frame where a failing ‘should’/‘expect-2’ is logically consistent and not indicative of wrongness” and “systematically tearing that frame apart by following the signs to the missing truth”. And then doing it over and over, across a wide range of things and into increasingly counterintuitive areas, until the idea that “Maybe this time it’ll be different!” stops feeling realistic and starts feeling like a joke. Working through each example usually takes an hour or two of back and forth until all the objections are defeated and the end result recognized as inevitable rather than merely “plausible”.
I’d count it a success if you walk away skeptical, but with a recognition that you can’t rule it out either, and a good enough sketch that you can start filling in the details.
Yes! Not quite, but close!
So yes, those two are different. And yes, “low-key exercising authority” is a key distinction to make here. However, it’s not the case that expecting your employee to show up on time is simultaneously “low-key exercising authority” AND “not a prediction”. It’s either still a prediction, or it’s not exercising authority. The mechanism of exercising authority is through predicting people will do as you direct them to, and if you lose that then you don’t actually have authority and are simply engaging in make believe.
This is a weird concept, but “intentions” and “expectations” are kinda the same thing, related to differently. This is why your mom could tell you “You are going to start behaving right now!” and you don’t get confused why she’s giving an order as if it’s a prediction. It’s why your coach in high school would say “You have to believe you can win!”, and why some kids really did choke under pressure and under-perform relative to what they were otherwise capable of. When it comes to predicting whether you’ll get a glass of water when you’re thirsty, you can trivially realize either prediction, so you solve this ambiguity by choosing to predict you’re going to get what you want and act so as to create that reality. If you want to start levitating a large object with your mind, you can’t imagine that working so it gets really hard to even intend to do it. That’s the whole “use the try harder Luke” stuff. When it gets hard to expect success, it gets hard to even try. (Scott’s writing on “predictive processing” touches on this equivalence.)
If you’ve been trying to low-key authority at someone to show up on time, and then you start looking real closely at what you’re doing, one potential outcome is that you simply anticipate they’ll show up, yes. In this case, it’s like… think of the difference between “I expect myself to get up and run a mile today” when you really don’t wanna and you can feel the tension that exercising authority is creating and you’re not entirely sure it’ll keep working… and then compare that to what it feels like when “run a mile” is just what you do, like getting a glass of water when you’re thirsty, or brushing your teeth in the morning (hopefully). It may still suck, and you may not *like* running, but you notice your feet start walking out the door almost “on their own” because “not running” isn’t actually a thing anymore. Any tension there is the uncertainty you’re trying to deny in order to insist reality bend to your will, and when you look closely and find out that it’s definitely gonna happen, it goes away because you’re not uncertain anymore.
In the other extreme when you find that it’s definitely not gonna happen, you “stop caring” in the sense that you no longer get bothered by it, but not in the sense that you’d no longer appreciate the guy showing up on time, and not in the sense that you stop exerting optimization pressure in that direction. It actually frees you up to optimize a lot more in that direction, because you’re no longer navigating by a bad map, you no longer come off as passive aggressive or aggressive and lacking in empathy, and you’re not bound to expecting success before you do something. So for example, that recent case I referenced involved me feeling annoyed by an acquaintance’s condescending douchery. Once I looked at where I was going wrong (why he was the way he was, why my annoyance had no authority over him, etc), I no longer “cared” in the sense that his behavior didn’t annoy me anymore. But also, that lack of annoyance opened up room for me to challenge and tease him without it being perceived as (or being) an ego threat, and now I actually like the guy and rather than condescending to me he regularly asks for my input on things (even though his personality limitations are still there).
In the middle, you realize you don’t actually know whether or not they’re going to start showing up on time. Instead of asserting “I expect!” hoping for the best, you realize that you can’t “decide” what they do, but they can, so you ask: “Do you think you’re going to start coming in on time?”. And you wait for an answer. And you look to see what this means they will actually do. This feels very different from the other side. Instead of feeling like you’re being projected at, it feels like you’re being seen. You can’t just “say” you will, because your boss is no longer looking to see if you’ll prop up his semi-delusional fantasy a little longer; he’s looking to see what you will do. Instead of being pushed into a role where you grumble “Yes sir..” because you have no choice and having things happen to you that are out of your control, the weight of the decision is on your shoulders, and you feel it. Are you going to start showing up on time?
I’m not going to demand anything, especially when I don’t plan to reply again after this. (Or… okay, I said I’d limit myself to two more replies but I’m going to experiment with allowing myself short non-effortful ones if I’m able to make them. Like, if you want to ask questions that have simple answers I’m not going to rule out answering them. But I am still going to commit to not putting significant effort into further replies.)
But the thing that brought me into this conversation was the semantic stop sign thing. It still seems to me like that part of the conversation went “you should do X” / “I did X” / “this isn’t about you, no one does X”. And based on my current understanding of what you meant by “semantic stopsign”, I agree that gjm didn’t do X, and it feels like you’ve ignored both him and myself trying to point this out.
I expect there’s a charitable explanation for this, but I honestly don’t have one in mind.
Mm, I think I know what you mean, but… I don’t think I trust that you’re at that level?
So, okay, I have to remember here that this thread originally came up when I felt like you’d think you knew better than me what was in my own head, and then fair play to you, you didn’t. But then you defended the possibility of doing that, without having done it in that specific case. So this bit has to be caveated with a “to the extent that you actually did the thing”, which I kind of think you did a bit with gjm and Richard but I’m not super sure right now.
As I said, I agree it’s possible to know what’s going on in someone else’s mind better than they do. I agree that the things you say here make it more likely than otherwise.
But at best they’re epistemically illegible; you can’t share the evidence that makes you confident here, in a way that someone else can verify it. And it’s worse than that, because they’re the sort of thing I feel like people often self-delude about. Which is not to say you’re doing that; only that I don’t think I can or should rule out the possibility.
So these situations may seem very different to you, and you may be right. But as a reader, they look very similar, and I think I’m justified in reacting to them similarly.
There are ways to make me more disposed to believe you in situations like this, which I think roughly boil down to “make it clear that you have seen the skulls”. I’ve written another recent comment on that subject, though only parts of it are relevant here.
No good thing to quote here, but re expecting: I feel like you’re saying “these are the same” and then describing them being very different.
So sure, I expect-2 my employee to show up on time, and then I do this mental shift. Then either I expect-1 him to show up on time; or I realize I don’t expect-1 him to show up on time, and then I can deal with that.
And maybe this is a great mental shift to make. Actually I’d say I’m pretty bullish on it; this feels to me more like “oh, you mean that thing, yeah I like that thing” than like “oh, that’s a thing? Huh” or “what on earth does that mean?”
So I don’t think the point of friction here is about whether or not I understand the mental shift. I think the point of friction is, I have no idea why you described it as “there’s only one kind of expect”.
Like… even assuming I’ve made this mental shift, that’s not the same as just expect-1ing him to show up on time? This feels like telling me that “a coin showing heads” is the same as “a coin that I’ve just flipped but not yet looked at”, because once I look I’ll either have the first thing or I’ll be able to deal with not having it. Or that “a detailed rigorous proof” is the same as “a sketch proof that just needs filling out”, because once I fill out the details I’ll either have the first thing or I’ll be able to deal with the fact that my proof was mistaken.
And that’s from the perspective of the boss. Suppose I’m the employee and my boss says that to me. I can’t make the mental shift for her. It probably wouldn’t go down very well to ask “ah, but do you predict that I’ll show up on time? Because if you don’t, then you should come to terms with that and work with me to...”
Maybe if my boss did make this mental shift, then that would be good for me too. But given that she hasn’t, I kind of need to know: when she used the word “expect” there, was that expect-1 or expect-2? Telling me about a mental shift she could make in the way she relates to expectations seems unhelpful. Telling me that the two kinds of expectations are the same seems worse than useless.
“Demand” is just a playful way of saying it. Feel free to state that you think what I skipped over is important as well. Or not.
I’m confused. I assume you meant to say that you agree with gjm that he *did* do X, and not that you agree with me that he didn’t?
Anyway, “You should do X”/”I did X”/”No one does X” isn’t an accurate summary. To start with, I didn’t say he *should* do anything, because I don’t think that’s true in any sort of unqualified way—and this is important because a description of effects of a type of action is not an accusation while the presupposition that he isn’t doing something he should be doing kinda is. Secondly, the thing I described the benefits of, which he accused me of accusing him of not doing, is not a thing I said “no one does”. Plenty of people do that on plenty of occasions. Everyone *also* declines to do it in other cases, and that is not a contradiction.
The actual line I said is this:
Did he “look closely” and “stop allowing ‘should’ to function as a semantic stop sign”? Here’s his line:
He did take the first step. You could call it two, if you want to count “this specific person is the one who should make it happen” as a separate step, but it’s not a sequential step and not really relevant. “This should happen”->”the world would be better if it did” is the only bit involving the ‘should’, and that’s a single step.
Does that count as “looking closely”? I don’t see how it can. “Looking at all”, sure, but I didn’t say “Even the most cursory look possible will reveal..”. You have to look *closely*. AND you have to “stop allowing ‘should’ to function as a semantic stopsign”.
He did think “What do I mean by that?”, and gave a first level answer to the question. But he didn’t “stop using should as a stop sign”. He still used “should”, and “should” is still a stop sign. When you say “By ‘should’, I mean ____”, what you’re doing is describing the location of the stop sign. He may have moved it back a few yards, but it’s still there, as evidenced by the fact that he used “should” and then attributed meaning to it. When you stop using should as a stopsign, there’s no more should. As in “I don’t think Chapman ‘should’ do anything. The concept is incoherent”.
It’s like being told “This thing you’re in is an airplane. If you open the throttle wide, and you resist the temptation to close it, you will pick up speed and take off”, and then thinking you’ve falsified that because you opened the throttle for three seconds and the plane didn’t take off.
In general it’s better to avoid talking about specific things people have done which can be interpreted as “wrong” unless you have an active reason to believe that focus will actually stay on “is it true?” rather than “who loses status if it’s true”—or unless the thing is actually “wrong” in the sense that the behavior needs to be sanctioned. It’s not that things can’t get dragged there anyway if you’re talking about the abstract principles themselves, but at least there’s a better chance of focus staying on the principles where it should be.
I was kinda hoping that by saying “Takeoff distance is generally over a quarter mile, and many runways are miles long”, you’d recognize why the plane didn’t take off without needing to address it specifically.
Well, I was pretty careful not to comment on what Richard and gjm were doing. I didn’t accuse gjm of anything, nor did I accuse Richard of anything. I see what TAG saw. I also saw gjm respond to my “self-predictably false expectation is a failure of rationality” the way someone would respond if the only reason they could see for believing it were unawareness of the perspective that says “there are two senses of the word ‘expect’” solves the problem—and in a way that I can’t imagine anyone responding if they were aware of the very good reasons that can coexist with that awareness.
I think those pieces of evidence are significant enough that dismissing them as meaningless is a mistake, so I defended TAG’s decision to highlight a potential problem and I chose to highlight another myself. Does it mean that they *were* doing the things that this interpretation of the evidence points towards? Not necessarily. I also didn’t assert anything of the sort. It’s up to the individual to figure out how likely they think that is.
If, despite not asserting these things, you think you know enough about what’s going on in my mind that you can tell both my confidence level and how my reasoning doesn’t justify it, then by all means lemme know :P
I mean, not *trivially*, yeah. Such is life.
For sure, it’s definitely a thing that can happen and you shouldn’t rule it out unless you can tell that it’s not that—and if you say you can’t tell it’s not that, I definitely believe you. However, “it’s just self delusion” does make testable predictions.
So for example, say I claim to be able to predict the winning lottery numbers but it’s really just willful delusion. If you say “Oh that’s amazing! What are tomorrow’s numbers?”, then I’m immediately put to the choice of 1) sticking my neck out, lying, and putting a definite expiration date on having any of my BS taken seriously, 2) changing my story in “unlikely” ways that show me to be dodging this specific prediction without admitting to a general lack of predicting power (“Oh, it doesn’t work on March 10ths. Total coincidence, I know. Every other day though..”), or 3) clarifying that my claims are less bold than that (“I said I can predict *better than chance*, but it’s still only a ~0.1% success rate”), and getting out of having my claims deflated by deflating them myself.
By iterating these things, you can pretty quickly drive a wedge in that separates sincere people from the delusional—though clever sociopathic liars will be bucketed with the sincere until those expiration dates start arriving. It takes on the order of n days to bound their power to predicting at most 1/n, but delusion can be detected as fast as anticipations can be elicited.
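A rough sketch of that bound, if it helps. The numbers here are made up; the point is just that a run of misses makes any claimed success rate much above 1/n implausible, while the deliberately modest “better than chance” claim survives basically untouched:

```python
# Rough, illustrative sketch: if someone makes n concrete predictions and all
# of them miss, how plausible is a claimed success rate p? A streak of n
# misses happens with probability (1 - p)**n, which crushes rates much
# above ~1/n while leaving tiny claimed rates essentially untested.
def chance_of_n_misses(p, n):
    return (1 - p) ** n

n = 30
for p in (0.5, 0.2, 1 / n, 0.001):
    print(f"claimed success rate {p:.3f}: "
          f"P(all {n} predictions miss) = {chance_of_n_misses(p, n):.6f}")
```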
Well, you’re justified in being skeptical, for sure. But there’s an important difference between “Could be just self delusion, I dunno..” and “*Is* just self delusion”—and I think you’d agree that the correct response is different when you haven’t yet been able to rule out the possibility that it’s legit.
For sure, there are skulls everywhere. The traps get really subtle and insidious and getting comfortable and declaring oneself “safe” isn’t a thing you ever get to do. However, it sounds like the traps you’re talking about are the ones along the lines of “failing to even check whether you anticipate it being true before saying ‘Pshh, you’re just saying that because you haven’t read Guns Germs and Steel. Trust me bro, read it and you’ll believe me’”—and those just aren’t the traps that are gonna get ya if you’re trying at all.
My point though was that there are successes everywhere too. “Seeing someone’s mind do a thing that they themselves do not see” is very very common human behavior, even though it’s not foolproof. In fact, a *really good* way to find out what your own mind is doing is to look at how other people respond to you, and to try to figure out what it is they’re seeing. That’s how you find things that don’t fit your narrative.
I get your distaste for that kind of comment, and I agree that there’s ways Val could have put in more effort to make it easier to accept. At the same time, recoiling from such things is a warning sign, and “nuggets of wisdom from above” is the last thing you want to tax.
I still remember something Val said to me years ago that had a similar vibe. In the end, I don’t think he was right, but I do think he was picking up on something and I’m glad he was willing to share the hypothesis. Certainly some other nuggets have been worth the negligible cost of listening to them.
Because there’s only one kind of expect. There’s “expecting”, and there’s “failing to expect, while pretending to be expecting and definitely not failing”. These are two distinct things, yes. Yet only the former is actually expecting.
It can seem like “I expect-2, then I introspect and things change, and I come out of it with expect-1”. As if “expect-2” is a tool that is distinct from expect-1 and sometimes the better tool for the job, but in this case you set the former down and picked up the latter. As if in *this case* you looked closer and thought “Oh wow, I guess I was mistaken! That’s a torx bolt not an allen bolt!”.
There’s *another* mental shift though, on the meta level, which starts to happen after you do this enough.
So you keep reaching for “expect-2”, and it kinda sorta works from time to time, but *every time* you look closer, you think “Ah, this is another one of those cases where an expect-2 isn’t the right tool!”. And so eventually you start to notice that it’s curiously consistent, but you think “Well, seeing a bunch of white swans doesn’t disprove the existence of black swans! I just haven’t found the right job for this tool yet!”—or rather “All the right jobs are coincidentally the ones I haven’t examined in much detail! Because they’re so obvious!”.
Eventually you start to notice that there’s a pattern to it. It’s not just “This context is completely different, the considerations that determine which tool to use are completely different, and what a coincidence! The answer still points the same way!”. It’s “Oh, I followed the same systematic path, and ended up with the same realization. I wonder if maybe there’s something fundamental going on here?”. Eventually you get to the point where you start to look at the path itself, and recognize that what you’re doing is exposing delusion, and the things which tell you what step to take next are indicators of delusion which you’ve been following. Eventually you notice that the whole “unique flavor” that *defined* “expect-2″ is actually the flavor of delusion which you’ve been seeking out and exposing. And that the active ingredient in there, which made it kinda work when it did, has been expect-1 this whole damn time. It’s not “a totally different medicine”. It’s the same medicine mixed with horseshit.
At some point it becomes a semantic debate because you can define a sequence of characters to mean anything—if you don’t care about it being useful or referring to the same thing others use it to refer to. You could define “expect-2” as “expect-1, mixed with horse shit, and seen by the person doing it as a valid and distinct thing which is not at all expect-1 mixed with horse shit”, but it won’t be the same thing others refer to when they say “expect-2”—because they’ll be referring to a valid and distinct thing which is not at all expect-1 mixed with horse shit (even though no such thing exists), and when asked to point at “expect-2” they will point at a thing which is in fact a combination of expect-1 and horseshit.
Expectations will shift. To start with you have a fairly even allocation of expectation, and this allocation will shift to something much more lopsided depending on the evidence you see. However, it was never actually in a state of “Should be heads, dammit”. That wasn’t a “different kind of expectation, which can be wrong-1 without being wrong-2, and was 100% allocated to heads”. Your expectation 1 was split 50⁄50 between heads and tails, and you were swearing up and down that tails wasn’t a legitimate possibility because you didn’t want it to be. That is all there is, and all there ever was.
Ah, but look at what you’re doing! You’re talking about telling your boss what she “should” do! You’re talking about looking away from the fact that you know damn well what she means so that you can prop up this false expectation that your boss will “come to terms with that”! *Of course* that’s not going to work!
You want to go in the opposite direction. You want to understand *exactly* what she means: “I’m having trouble expecting you to do what I want. I’m a little bothered by that. Rather than admit this, I am going to try to take it out on you if you don’t make my life easier by validating my expectations”. You want to not get hung up at the stage of “Ugh, I don’t want to have to deal with that”/”She shouldn’t do that, and I should tell her so!”, and instead do the work of updating your own maps until you no longer harbor known-false expectations or attach desires to possibilities which aren’t real.
When you’ve done that, you won’t think to say “You should come to terms with that” to your boss, even if everyone would be better off if she did, because doing so will sound obviously stupid instead of sounding like something that “should” work. What you choose to say still depends on what you end up seeing but whatever it is will feel *different* -- and quite different on the other side too.
Imagine you’re the boss putting on your serious face and telling an employee that you expect them to show up on time from now on. It’s certainly aggravating if they say “Ah, but do you mean that? You should work on that!”. But what if you put your serious face on, you say to them “Bob, I noticed that you’ve been late a couple times recently, and I expect you to be on time from now on”, and in response, Bob gives you a nice big warm smile and exclaims “I like your optimism!”.
It still calls out the same wishful thinking on the boss’s part, but in a much more playful way that isn’t flinching from anything. Sufficiently shitty bosses can hissy fit about anything, but if you imagine how *you* would respond as a boss, I think you’d have a hard time not admitting to yourself “Okay, that’s actually kinda funny. He got me”, even if you try to hide it from the employee. I expect that you’d have a real hard time being mad if the employee followed up “I like your optimism!” with a sincere “I expect I will too.” And I bet you’ll be a little more likely to pivot from “I expect!” towards something more like “It’s important that we’re on time here, can I trust that you won’t let me down?”.