I agree with Richard: there are perfectly good answers to this, which do in fact involve providing “at least something” concrete. For instance, someone helpful and fairly verbose might say:
“It’s a bit complicated, because the university has colleges and academic departments and they’re spread out all over the city. But from here I can show you some things you might be interested in. Over there is the Senate House: that’s where the governing body of the university has its meetings. On that side you can see King’s College—you might recognize its chapel—and over on the other side is a less famous college, Gonville & Caius, which you probably haven’t heard of but it’s where Stephen Hawking was a fellow. The big church behind us isn’t part of the university but it is associated with the university, and some of the regulations students have to obey say things like they have to be within 5 miles of this church for so many days per year. The academic departments—things like history, mathematics and so on—generally don’t live in beautiful historic buildings, and in any case you can’t see any of them from here, but if you want to see one I think the nearest to here is if you go along the street, past King’s and St Catharine’s colleges and what used to be the Cambridge University Press, and turn right down Silver Street: on your right just before the river is the Department of Sociology, which used to be Pure Mathematics. It’s nothing much to look at, though. If you want to know where everything is, I can show you a lot of it on a map, but there are bits of university all over the city, especially in the centre. Or if you just want to see some of the highlights, if you walk the length of this street starting at the far end that way, you’ll see a bunch of the most famous colleges: St John’s, Trinity, Gonville & Caius, King’s, St Catharine’s, Pembroke, and Peterhouse. Trinity is the biggest and richest. Peterhouse is the smallest and oldest.”
The sort of answer Richard is complaining about would go more like this:
“Well, the University is not the same thing as one of its colleges, or the same thing as one of its departments. Indeed, the university is not the same thing as all of its colleges or all of its departments. You might say that the university is the totality of all the teaching and research it does, but that isn’t really it either. The university is all around you, but if you aren’t part of it you probably can’t see it. Trinity College has more Nobel prizewinners than any of the others, but that doesn’t mean it’s where the university really is, and people at Trinity actually have rather a reputation for thinking the world revolves around them. The University of Cambridge is one of the world’s greatest academic institutions.”
… which isn’t wrong and points out some things that the other very concrete sort of answer ignores or glosses over—e.g., what a university actually is—but doesn’t do anything to answer the question the tourist is trying to ask.
It’s reasonable to want such a thing, but David is quite explicit about the fact that he hasn’t (yet) given Richard what he wants.
So… how do you learn meta-rationality? Mostly, at this point in history, by figuring it out for yourself; or through apprenticeship, if you are lucky.
There’s no textbook, no college course, no training program. All those may be possible. In the Cells of the Eggplant is meant to be something like a textbook—but in early 2022, as I write this essay, the unfinished book is mostly promises, not explanations.
He is also explicit about why it’s not as straightforward as one might think to give that type of answer.
You can’t fully understand what meta-rationality’s subject matter is until you can be meta-rational—just as you can’t fully understand what rationality means until you are rational. Meta-rationality doesn’t have principles. It is partly about the nature and functions of principles, and how to use them skillfully according to context. Meta-rationality isn’t about solving problems. It is partly about finding and choosing and formulating problems.
If you think it’s not actually that hard, then you can try giving a better answer yourself. If you think his intended audience already knows “what a university is” or else doesn’t need to know before usefully parsing an answer to “where” that isn’t simply a location, then you can make those arguments too. There are definitely ways to make criticisms that address what Chapman is saying about what he’s not saying and why.
When the response is “He only did [the thing he said he was doing]”, and it is framed as criticism rather than as “duh, why am I even saying this”, then it does call for reevaluating the expectations themselves. If the expectations were accurate there’d be no complaints, so they’re clearly not good expectations. And they didn’t come from Chapman, who explicitly disclaimed them in this post, so it’s not like it’s any evidence against what he’s saying. At that point, “What do you expect, and what makes you think Chapman not meeting your expectations is a problem with Chapman rather than a problem with your expectations?” is an entirely appropriate place to direct attention.
I have two suspicions and it’s difficult to distinguish between them.
1. There’s less to meta-rationality than meets the eye, because the insights, abilities, etc. that it actually provides are not in fact new but are things that many competent rationalists are already deploying.
2. There’s less to meta-rationality than meets the eye, because actually “there’s no there there” at all: all “meta-rationality” is is a habit of looking down on rationalists.
For the avoidance of doubt, that’s an enumeration of my suspicions and I am not intending to rule out a third possibility, that
3. Meta-rationality really is a thing, its practitioners really are more insightful, more effective, etc., than anyone who practises rationality and doesn’t explicitly think in terms of meta-rationality (or at least more effective than those people would be if they didn’t), and either it’s just really difficult to explain clearly or else its proponents prefer not to for some reason.
I suspect that actually there are elements of all three. At any rate, I neither know nor profess to know exactly what combination of them may be in play, which means I’m not in a position to “give a better answer [my]self”.
(But one thing I am fairly sure is not true is that LW-rationalists as such haven’t noticed, or that LW-rationality as such doesn’t acknowledge, such elementary observations as “effective reasoning involves working out how to solve problems and not just learning stereotyped ways to solve specific preordained problems” and “things happen in contexts and you should pay attention to those” and “when solving a problem, you should also consider whether you should actually be solving a different problem” and “sometimes the problems you’re presented with are not very clearly defined”, and to whatever extent “meta-rationality” is supposed to be distinguished from What We Do Around Here by recognizing this sort of thing, I think there are straw men being erected.)
What do you expect, and what makes you think Chapman not meeting your expectations is a problem with Chapman rather than a problem with your expectations?
“Expect” means two different things: you expect-1 X when you think X will probably happen; you expect-2 X when you think that X should happen (more precisely, that some person/group/institution should make it happen; more precisely, that the world will be a better place according to your values or theirs if they do).
If someone says “I am not going to give you clear answers about this” and proceeds not to give clear answers, then for sure you shouldn’t expect-1 that they will give you clear answers. But you could still think that they should; you could still think that if they don’t then what they say isn’t very useful, or that if they don’t it indicates that they’re not being honest somehow.
Consider the opening of Chapman’s Eggplant. Chapman suggests, though he doesn’t quite claim explicitly, that the techniques he’s going to be trying to teach are what distinguishes the people whose extraordinary effectiveness in technical fields looks like magic; what enables them to do things that seem “exciting, magic, an incomprehensible breakthrough”. He says that up to now this sort of ability has had to be learned “through apprenticeship and experience” … but that “this book is the first practical introduction”.
I think it is reasonable to ask the question: has Chapman in fact presented us with (1) any evidence that techniques he understands and we muggles don’t (but, under his tutelage, maybe could) could in fact elevate us to that level if we aren’t there already, or (2) an actual “practical introduction” that will enable (more than a minuscule fraction of) us to do such things? And I think it’s clear that the answer so far is no.
Now, of course there’s nothing wrong with not having finished something yet. But if I were writing a book that promised to teach its readers to do magic, and it contained as yet no information about how to do magic and no evidence that they will ever be able to do magic, I would put prominent disclaimers and warnings to that effect right beside the bit where it makes those promises. And if I started writing such a book, wrote all the bits that make those promises, put it on the internet, and somehow never got around to writing the bits that actually teach the reader to do magic, then I think I would deserve to face a fair bit of skepticism.
You seem to frame this as either there being advanced secret techniques, or it just being a matter of common sense and wisdom and as good as useless. Maybe there’s some initial value in just trying to name things more precisely though, and painting a target of “we don’t understand this region that has a name now nearly as well as we’d like” on them. Chapman is a former AI programmer from the 1980s, and my reading of him is that he’s basically been trying to map the poorly understood half of human rationality whose difficulty blindsided the 20th century AI programmers.
And very smart and educated people were blindsided when they got around to trying to build the first AIs. This wasn’t a question of charlatans or people lacking common sense. People really didn’t seem to break rationality apart into the rule-following (“solve this quadratic equation”) and pattern-recognition (“is that a dog?”) parts, because up until the 1940s all rule-based organizations were run solely by humans, who cheat and constantly apply their pattern-recognition powers to nudge just about everything going on.
So are there better people than Chapman talking about this stuff, or is there an argument why this is an uninteresting question for human organizations despite it being recognized as a central problem in AI research, with things like Moravec’s paradox?
For the avoidance of doubt, that’s an enumeration of my suspicions
Those suspicions are fair. I agree that Chapman does a poor job of ruling out your second suspicion (perhaps because he’s not completely innocent there), and that it takes away from his message quite a bit. I wish he’d recognize this and do a better job here.
But one thing I am fairly sure is not true is that LW-rationalists as such haven’t noticed, or that LW-rationality as such doesn’t acknowledge, such elementary observations as “effective reasoning involves working out how to solve problems and not just learning stereotyped ways to solve specific preordained problems”
There are two different things going on here. One is that (at least a sizable minority of) engineering professors definitely do lack not only those distinctions, but the ability to see those distinctions when slapped in the face with strong evidence that they’re missing something. It would probably boggle your mind, as it did mine at the time. You can argue that LW is generally above that and therefore doesn’t need Chapman, but that is a very different thing from denying or failing to recognize the existence and importance of these phenomena in what are normally thought of as “smart rational people”.
The second is that it isn’t as simple as “Oh, I recognize that” or “I can’t see it yet”. It’s also possible to recognize it in the abstract, but fail to connect all the dots in practice, and therefore think you have it all figured out when there is much to learn. For example, how many times have you seen someone claim “Science has shown Y” and treat Y as if it were “Scientifically verified” itself when in fact science only verified X, which plausibly but by no means certainly implies Y? How many of those people would say anything but “Duh.” if you remind them that the scientific result is distinct from their interpretation of the result, and that it’s possible in theory for the result to be right and their conclusion wrong? In my experience, and I expect yours to be similar, a large majority of people are simultaneously aware of the possibility in the abstract and yet conflate the two without awareness even when the two things aren’t that close.
Rather than interpreting it as “None of y’all are even aware of this obvious thing!”, I’d interpret it more as “This deserves more attention, because the subtleties are a bit tricky and it’s a more important distinction than many of you realize”. It may still be false, as applied to you or LW, but it’s a very different claim and much more reasonable.
“Expect” means two different things:
Heh, I understand that perspective. It’s convincing, but ultimately false. The frame that “I know that X will happen, and I’m just saying it shouldn’t” falls apart when you look at it closely and stop allowing “should” to function as a semantic stop sign. I distinctly remember the conversation where I was insisting that, and where my perspective was torn apart piece by piece by someone who had been down those roads further than I.
Once you start digging into the “why?” behind your expectation-2s failing to be realized, and acknowledging the truths as you see them, some interesting things start happening. You can only want for things that you are holding room for as “possible” in a sense (though it can certainly seem otherwise!), and so once you recognize why it’s not actually possible for things to have gone differently, your “wants” change to match. Your “expectation-2s” shift to things which are likely to actually become realized and effectiveness goes way the fuck up—as you might expect from taking out relevant inaccuracies from the map you’re using to navigate.
This also works for things like “irrational fear” and even things like physical pain, where it seems even more convincing that “pain is nerve sensations, not false expectation-1s!”. It’s by no means trivial and I don’t really expect you to believe me, but this is something I routinely do myself and have walked others through many times (after being walked through it myself until I started to grok it).
has Chapman in fact presented us with (1) any evidence that techniques he understands and we muggles don’t (but, under his tutelage, maybe could)
Note that this is very much a status focused framing, and that such framings are fraught with problems. “I don’t want to give this person status that I’m not convinced they deserve” brings high risks of bad thinking.
And if I started writing such a book, wrote all the bits that make those promises, put it on the internet, and somehow never got around to writing the bits that actually teach the reader to do magic, then I think I would deserve to face a fair bit of skepticism.
I’d expect to as well, and skepticism would be fair, but “deserve” is a funny word with all those buried assumptions that start to fall apart when you look closely. I don’t think Chapman has ever shown signs of expecting (in either sense) that this skepticism not happen, or that some people won’t be of the opinion that he “deserves” it. Nor have I seen anyone else suggesting that this skepticism isn’t reasonable.
It seems rather unfair to accuse me of allowing “should” to function as a semantic stop sign, when the very first thing I did after writing “should” was to go into more detail about what “should” might mean in more concrete terms.
My “status-focused framing” is simply making more explicit a status-focused framing that I think is already there when people talk about “meta-rationality”, “post-rationalism”, and the like. I agree that turning intellectual discussions into status fights is harmful, and my intention was to draw attention to the fact that that’s a thing that Chapman and others are doing.
It seems rather unfair to accuse me of allowing “should” to function as a semantic stop sign, when the very first thing I did after writing “should” was to go into more detail about what “should” might mean in more concrete terms.
I’m not accusing you; it’s not a “you” thing. Everyone uses the word “should” as a semantic stop sign, myself included, because that’s fundamentally what the word is. It’s not a problem so long as you’re aware of the trade-offs involved, and clearly you’re aware that there is a failure mode to be avoided. However, as I said last time, it’s not as simple as “perfectly aware” vs “totally unaware”. The more subtle parts are not at all obvious, and plenty of very smart people who I have a lot of respect for still miss them. Missing the full extent of it is the default.
My “status-focused framing” is simply making more explicit a status-focused framing that I think is already there when people talk about “meta-rationality”, “post-rationalism”, and the like. I agree that turning intellectual discussions into status fights is harmful, and my intention was to draw attention to the fact that that’s a thing that Chapman and others are doing.
Yes, I am aware that this is what your intention is, and that from your perspective it looks like the status nonsense is already there because of the way he is going about things. Still, “drawing attention to [my accusation that] this is what the other guy is doing” doesn’t get you out of status issues. It focuses your attention on “How much status does this guy deserve, and do I really have to accept a low status position as a ‘muggle’ that needs him to ‘enlighten’ me”, and in doing so distracts from and obscures “Is there something true and useful that he is attempting to communicate?”—which is the right way to determine who gets the “status” benefits of being listened to anyway.
Coincidentally, just this morning I wrapped up a conversation with a friend that very much relates to what we’re talking about here. It took me a while to convey to her how expect-2 is actually just expect-1 in disguise, but in the end it changed her perspective from one where achieving the “should” was out of the question in her mind to one which mirrors times she’s been successful with similar things in the past and she can anticipate success—and in a way that I think fits Chapman’s concept of a failure of rationality and success with meta-rationality (at least, if I’m understanding Chapman correctly). Also tied in there is a bit about my own struggles with pretty much the same problem, and how I only got out of it by interpreting my own (correct!) thoughts of “This other guy is doing status bullshit” as a warning sign of my own imperfections there, and setting aside the semantic stop-signs I had been resting on.
It’s a little difficult to explain, but if you’re interested in hearing a real world example I’ll try to see if I can spell it out in a way that makes sense.
I’m not accusing you; it’s not a “you” thing. Everyone uses the word “should” as a semantic stop sign, myself included, because that’s fundamentally what the word is.
What does it mean for a word to fundamentally be a semantic stop sign?
As gjm noted, after using the word he elaborated on it:
you expect-2 X when you think that X should happen (more precisely, that some person/group/institution should make it happen; more precisely, that the world will be a better place according to your values or theirs if they do).
If I were to say “Bill Gates should put some of his money into AI safety”, I take it you think “should” is being a semantic stop sign. If I say “the world would be a better place according to my values if Bill Gates were to put some of his money into AI safety”, is there still a semantic stop sign there? Do you claim that’s not what I meant by “should”?
A word is fundamentally a semantic stopsign when the whole purpose of the word is to cover up detail and allow you to communicate without having to address what’s underneath.
As I mentioned before, this isn’t always a problem. If someone says “I just realized I should pee before we leave”, and then goes and pees, then there really isn’t an issue there. We can still look more closely, if we want, but we aren’t going to find anything interesting that changes the moral of the story. They realized that if they don’t pee before leaving, they will end up with a full bladder and no convenient way to empty it. Does it mean they would otherwise have to waste time pulling over? That they won’t have a chance and will pee themselves? It doesn’t really matter, because it is sufficiently clear that it’s in everyone’s best interest to let the guy go pee before embarking on the road trip with him. The right answer is overconstrained here.
Similarly, “Bill should donate” can be unproblematic—if the details being glossed over don’t change the story. Sometimes they do.
If you say “Bill Gates should!” and then say “Well, what I really mean by that is… that I would like it, personally, because it would make the world better according to my values”, then that changes things drastically. “Should!” has a moral imperative that “I’d personally like...” simply does not—unless it somehow really matters what you’d like. Once you get rid of “should” you have to expose the driving force behind any imperative you want to make. Is it that you just want his money for yourself? Do you have a well thought out moral argument that Bill should find compelling? If so, what is it, and why should Bill find it compelling?
Very frequently, people will run out of justification before their point becomes compelling. I have a friend, for example, who thinks “Health care ‘should’ be ‘free’”, and who gets quite grumpy if you point out her lack of an actual argument. Fundamentally, what she means is “I want healthcare, and I don’t want to pay for it”, but saying it that way would make it way too obvious that she doesn’t actually have a compelling reason why anyone should want to pay for her healthcare—so she sticks with “it should be free”. This isn’t a political statement, btw, since I’m not saying that good arguments don’t exist, or that “health care ‘shouldn’t’ be ‘free’”. It’s just that she wants the world to be a certain way that would be convenient to her, and the way things currently are violates a “fairness” intuition she has, so she’s upset about it without really understanding what to do about it. She doesn’t see any reason that everyone else would find compelling, and so she moralizes in the hopes that the justification is either intuitively obvious to everyone else, or else that people will care that she feels that way.
And that’s an empirical prediction. If you say “You should do X” and “expect-2“ at them to do it, and they do, then clearly your moralizing had sufficient force and you were right to think you could stop at that level of detailed support. If you start expecting at your dog to sit, and you’ve never taught it to sit, then there’s just no way to fill in the details behind “the dog should sit” which make any sense. “The world would be better if it sat”—sure, let’s grant that. What do you think you’re accomplishing by announcing this fact instead of teaching the dog to sit? Notice how that statement is oddly out of place? Notice how the “should” and “expectation-2” kinda deflate once you recognize that the expectation will necessarily be falsified?
Returning to the “irrational fear” example, the statement is “I shouldn’t be afraid”. If you follow that up with a lack of fear, then fine. Otherwise, you have a contradiction. Get rid of the “should”, and see what happens. “I shouldn’t be afraid” → “I feel fear even though there is no danger”. Oh, there’s no danger? Now that you’re making this claim explicitly, how do you know? How come it looks like you think you’re going to splat your head on the concrete below? What does your brain anticipate happening, and why is it wrong? Have you checked to make sure the concrete anchors have been set properly? Are you sure you aren’t missing something that could lead to your head splatting on the concrete below?
When you can confidently answer “Yes, I have checked the knots, I have checked the anchors, and there is no way my head will splat on the concrete below. My brain was anticipating falling without any actual cause, and I can see now that there are no paths to that outcome”, then how do you maintain the fear? What are you afraid of, if that can’t happen? It’s like trying to say “I don’t believe it’s raining, but it is raining”. Even if you feel the same “fear” sensations, once you’re confident that they don’t mean anything it just becomes “I feel these sensations which don’t mean anything”. Okay, so what? If you’re sure they don’t mean anything then go climb. We call that “excitement”, btw.
When you “should” or “expect” at a thing, and your expectations are being falsified rather than validated, then that’s the cue to look deeper. It means that the stuff being buried underneath isn’t working out like you think it should, so you’re probably wrong about something. If you’re trying and failing to rock climb without fear, it probably means you were flinching away from actually addressing the dangers and that you need to check your knots, check whether you’ve done enough checking, and then once you do that you will find yourself climbing without being burdened by fear. If you’re trying to say that someone should do something you want them to do and they aren’t doing it, it probably means you have a gap in your model about why they would care or else how they would know—and once you figure that out you’ll find yourself happily explaining more, or creating a reason for them to care, or realizing that your emotions had you acting out of line—depending on the case at hand.
That sorta make sense? I know it’s a bit far from intuitive.
It makes sense as, like, a discussion of “this is sometimes what’s going on when people use the word should”. I’m far from convinced that that’s always what’s going on, or that it’s what’s going on in this particular situation.
Like, I feel like you’re taking something that should be at the level of “this is a hypothesis to keep in mind” and elevating it to “this is what’s true”.
(Oh hey, I used “should”. What do I mean by that? I guess kind of the same as if I said “if you add this list of numbers, you should get zero”. That is, I feel like you’re making a mistake to hold this thing at a level of confidence different from what I think is the correct level of confidence. Was there more to my “should” than that? Quite possibly, but… I feel like you’re going to have a confident prediction about what that was? And I want to point out that while it’s possible you know better than me what’s going on inside my head, it’s not the default guess.)
I guess I also want to point out that the sequence of events here is, in part:
1. Richard says a thing.
2. TAG and then yourself use the word “expect” to suggest Richard was being unreasonable.
3. gjm uses the word “should” in a reply to your “expect” to suggest Richard was perhaps being reasonable after all.
4. Big discussion about the words “expect” and “should”.
Notably, Richard never used either of those words himself. So for example, you say “When you “should” or “expect” at a thing, and your expectations are being falsified rather than validated, then that’s the cue to look deeper.” Well, is Richard “should”ing or “expect”ing? I could believe he’s doing the thing just with different words, but I don’t think it’s been established that he is, and any discussion of “this is what these words mean” is completely beside the point.
Not that I have anything against long beside-the-point digressions, but I do think it’s good for everyone to be aware that’s what they are.
(Gonna limit myself to two more replies after this, and depending on motivation I might not even do that many.)
Like, I feel like you’re taking something that should be at the level of “this is a hypothesis to keep in mind” and elevating it to “this is what’s true”.[...]
The hypothesis “The word ‘should’ is being used to allow communication while motivatedly covering up detail that is necessary to address” is simply one hypothesis to keep in mind, and doesn’t apply to every use of the word “should”. However “The word still functions to allow communication without having to get into further detail” is just something that is always true. What would a counterexample even look like?
I’ve tried to be explicit in the last two comments that this isn’t always a bad thing. Your use of the word “should” here seems pretty reasonable to me. That doesn’t mean that there isn’t more detail being hidden (mistake according to what values?), just that we more or less expect that the remaining ambiguity isn’t likely to be important so stopping at this level of precision is appropriate.
I feel like I’m kinda saying the same thing as last time though. Am I missing what your objection is? Do you see why “semantic stopsign” shouldn’t be seen as a boo light?
That is, I feel like you’re making a mistake to hold this thing at a level of confidence different from what I think is the correct level of confidence.
This does highlight a potential failure mode though. Determining which level of confidence is “correct” for someone else requires you to actively know something about what they’ve seen and what they’d be able to see. It’s pretty hard to justify until you can see them failing to see something.
I feel like you’re going to have a confident prediction about what that was? And I want to point out that while it’s possible you know better than me what’s going on inside my head, it’s not the default guess.)
Not in this case, no. Because it’s a fairly reasonable use there’s no sign of failure, and therefore nothing to suggest what you might be doing without realizing you’re doing it.
If your answer to 54+38 is 92, I don’t have any way of knowing how you got there other than it worked. Maybe you used a calculator, or maybe you had a lucky guess. If you say 82, then I can make an educated guess that you used the method they teach in elementary school and forgot to carry the one. If you get 2052, I can guess pretty confidently that you used a calculator and hit the “x” button instead of the “+” button.
“Knowing what’s going on inside someone’s head better than they do” just means recognizing failure modes that they don’t recognize.
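To make the arithmetic example above concrete, here is a minimal sketch of the “diagnose the process from the error signature” idea. The `diagnose` function and the two error models (dropped carry, multiply-instead-of-add) are my own illustrative assumptions, not anything anyone in this thread has specified:

```python
def diagnose(a: int, b: int, answer: int) -> str:
    """Guess which process produced `answer` as an attempt at a + b."""
    if answer == a + b:
        return "correct -- no way to tell how they got there"

    # Column addition with every carry dropped: add digit-by-digit mod 10.
    width = max(len(str(a)), len(str(b)))
    no_carry = sum(
        ((a // 10**i % 10) + (b // 10**i % 10)) % 10 * 10**i
        for i in range(width)
    )
    if answer == no_carry:
        return "probably column addition, forgot to carry"

    if answer == a * b:
        return "probably hit 'x' instead of '+' on a calculator"

    return "no recognized failure mode"

print(diagnose(54, 38, 92))    # correct
print(diagnose(54, 38, 82))    # forgot to carry the one
print(diagnose(54, 38, 2052))  # multiplied instead of added
```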
Well, is Richard “should”ing or “expect”ing? I could believe he’s doing the thing just with different words, but I don’t think it’s been established that he is, and any discussion of “this is what these words mean” is completely besides the point.
Richard is not on trial. He didn’t do anything anti-social that calls for such a trial. It would be presumptuous to speak for him, and inappropriately hostile to accuse him. I’m uncomfortable with the implication that this is what the “point” is, and perhaps should have disclaimed this explicitly early on. Heck, his main point isn’t even wrong or unreasonable.
It’s just that the fact that he felt it to be necessary to say suggests something about his unspoken expectations—because Chapman’s/TAG’s expectations wouldn’t have led to that comment feeling relevant even though it’s still fairly true. Productive conversation requires addressing the actual disagreement rather than talking past each other, and when those disagreements are buried in the underlying expectations, this means pointing the conversation there. That’s why TAG basically asked “What do you expect?”, and why it was the correct thing to do given the signs of expectation mismatches. Gjm responded to this by saying that “expect doesn’t always mean expect”, which is an extremely common misconception, and understanding why that is wrong is important—not just in general for the ability to have these kinds of discussions productively, but also for this conversation in particular.
However “The word still functions to allow communication without having to get into further detail” is just something that is always true. What would a counterexample even look like?
Hm. So I thought you were referring to a word like “cult” or “fascist”, where part of what’s going on is mixing normative and descriptive claims in a way that obfuscates. But now it seems you might have meant a word like “tree” or “lawyer” or “walk”; that is, that just about every word is a semantic stopsign?
And then when you say “stop allowing “should” to function as a semantic stop sign”, you mean “dig below the word “should” to its referent”, much as I might do to explain the words “tree” or “lawyer” or “walk” to someone unfamiliar with them?
But as gjm and I have both noted, he did that. Like, it sounds like that part of the conversation went: “you should do X” / “I did X” / “this isn’t about you, no one does X”.
I confess I do not understand this.
(It’s not that I thought you’d necessarily see my “should” above as bad. I just expected you’d think there was more going on with it than I thought was going on with it.)
mistake according to what values?
Not sure if this part is particularly relevant given my apparent misunderstanding of “semantic stopsign”. But to be clear, I meant a factual mistake.
“Knowing what’s going on inside someone’s head better than they do” just means recognizing failure modes that they don’t recognize.
Well, yes, and like I said it’s possible. And in the example you gave, I agree those would be good guesses.
But also, it seems common to me that someone will think they recognize a failure mode in someone else, one that that person doesn’t recognize; and that they’ll be wrong. Things like, “oh, the reason you support X is because you haven’t read Y”. Or “I would have liked that film if I hadn’t noticed the subtext; so when you say you liked that film, you must not have noticed the subtext; so when I point out the subtext you will stop liking the film”.
Something you said pattern matches very strongly to this, for me: “Heh, I understand that perspective. … I distinctly remember the conversation where I was insisting that...”.
It’s entirely possible that your framing there is priming me to read things into your words that you aren’t putting there yourself, and if I’m doing that then I apologize.
Gjm responded to this by saying that “expect doesn’t always mean expect”, which is an extremely common misconception
Well, honestly I’m still not convinced this is wrong. I had thought that getting into “should” might help clear this up, but it hasn’t, so.
It just doesn’t seem at all problematic to me that “I expect you to show up on time from today on” and “I expect the sun to rise tomorrow” are using two different senses of “expect”. It sounds like you think something like… if the speaker of the first one meditates on wanting, they will either anticipate that the listener will show up on time, or they will stop caring whether the listener shows up on time? I’m guessing that’s not a good description of what you think, but that’s what I’m getting.
Now to be clear, I wouldn’t describe this “expect” the same way gjm described “expect-2”. He made the distinction: “you expect-1 X when you think X will probably happen; you expect-2 X when you think that X should happen”. And I think I’d change expect-2 to something like “when you think that X should happen, and are low-key exercising some authority in service of causing X to happen”. Like “I expect you to show up on time” sounds to me like an order backed by corporate hierarchy, and “England expects every man will do his duty” sounds like an order backed by the authority of the crown. “I expect open source developers to be prompt at responding to bug reports” sounds like it’s exercising moral authority. And if we make this distinction, then it does not seem to me like Richard was expect-2ing anything of Chapman.
But that doesn’t seem particularly relevant, because:
understanding why that is wrong is important
So I agree this seems like the sort of thing that’s important-in-general to know about. If the word “expect” has only one common meaning, then I certainly want to know that; and if it has two, then I expect you want to know that.
But it still doesn’t seem like it matters in this specific case. This conversation stems from hypothesizing about what’s going on inside Richard’s head, and he didn’t use the word in question. So like,
“Richard is expecting ___” / “That seems like a fine thing for him to do, because “expect” can also mean ___” / “No it can’t, because...”
It seems like the obvious thing to do here, if we’re going to hypothesize about what’s going on in Richard’s head, is to just stop using the word “expect”? Going into “what does “expect” mean” seems like the opposite of productive disagreement.
For sake of brevity I’m going to respond just to the parts I see as more likely to be fruitful, but feel free to demand a response to anything I skip over and I’ll give one.
Hm. So I thought you were referring to a word like “cult” or “fascist”, where part of what’s going on is mixing normative and descriptive claims in a way that obfuscates. But now it seems you might have meant a word like “tree” or “lawyer” or “walk”; that is, that just about every word is a semantic stopsign?
Yes, closer to the latter. There’s always more underneath, even with words like “tree”.
However, the word “should” is a bit different, in ways we touch on below.
(It’s not that I thought you’d necessarily see my “should” above as bad. I just expected you’d think there was more going on with it than I thought was going on with it.)
Hm, okay.
And maybe there is. “Factual mistake” isn’t perfectly defined either. We could get further into the ambiguities there, but it’s all going to feel “yeah but that doesn’t matter” because it doesn’t. It’s defined well enough for our purposes here.
Well, yes, and like I said it’s possible. [...] But also, it seems common to me that someone will think they recognize a failure mode in someone else, one that that person doesn’t recognize; and that they’ll be wrong. Things like, “oh, the reason you support X is because you haven’t read Y”.
The important distinction to track here is whether the person is closing the loop or just saying whatever first comes to mind with no accountability. When the prediction fails, is there surprise and an update? Or do the goalposts keep moving and moving? The latter is obviously common and almost always leads to wrongness, but that isn’t a mark on the former which actually works pretty well.
Something you said pattern matches very strongly to this, for me: “Heh, I understand that perspective. … I distinctly remember the conversation where I was insisting that...”.
I can see what you mean, but it’s pretty different. Fixed goal posts, consistent experience of observing minds changing when they reach them, known base rates, calibrating to the specifics of what is and isn’t said, etc. I can explain if you want.
It’s entirely possible that your framing there is priming me to read things into your words that you aren’t putting there yourself, and if I’m doing that then I apologize.
No worries. I can tell you’re working in entirely good faith here. I’m not confident that I can convey what I’d like to convey in the amount of effort you’re willing to put into this conversation, but if I can’t it’s definitely because I’ve failed to cross the inferential distance and not because your mind isn’t open.
Well, honestly I’m still not convinced this is wrong. I had thought that getting into “should” might help clear this up, but it hasn’t, so.
“Convinced” is a high bar, and we have spent relatively few words on the topic. Really grokking this stuff requires “doing” rather than just “talking about”. Meaning, actually playing one or both sides of “attempting to hold onto the frame where a failing ‘should’/‘expect-2’ is logically consistent and not indicative of wrongness” and “systematically tearing that frame apart by following the signs to the missing truth”. And then doing it over and over, over a wide range of things and into increasingly counterintuitive areas, until the idea that “Maybe this time it’ll be different!” stops feeling realistic and starts feeling like a joke. Working through each example usually takes an hour or two of back and forth until all the objections are defeated and the end result recognized as inevitable rather than merely “plausible”.
I’d count it a success if you walk away skeptical, but with a recognition that you can’t rule it out either, and a good enough sketch that you can start filling in the details.
It just doesn’t seem at all problematic to me that “I expect you to show up on time from today on” and “I expect the sun to rise tomorrow” are using two different senses of “expect”. It sounds like you think something like… if the speaker of the first one meditates on wanting, they will either anticipate that the listener will show up on time, or they will stop caring whether the listener shows up on time? I’m guessing that’s not a good description of what you think, but that’s what I’m getting.
Yes! Not quite, but close!
So yes, those two are different. And yes, “low-key exercising authority” is a key distinction to make here. However, it’s not the case that expecting your employee to show up on time is simultaneously “low-key exercising authority” AND “not a prediction”. It’s either still a prediction, or it’s not exercising authority. The mechanism of exercising authority is through predicting people will do as you direct them to, and if you lose that then you don’t actually have authority and are simply engaging in make-believe.
This is a weird concept, but “intentions” and “expectations” are kinda the same thing, related to differently. This is why your mom could tell you “You are going to start behaving *right now!*” and you don’t get confused why she’s giving an order as if it’s a prediction. It’s why your coach in high school would say “You have to believe you can win!”, and why some kids really did choke under pressure and under-perform relative to what they were otherwise capable of. When it comes to predicting whether you’ll get a glass of water when you’re thirsty, you can trivially realize either prediction, so you solve this ambiguity by choosing to predict you’re going to get what you want and act so as to create that reality. If you want to start levitating a large object with your mind, you can’t imagine that working so it gets really hard to even intend to do it. That’s the whole “use the try harder, Luke” stuff. When it gets hard to expect success, it gets hard to even try. (Scott’s writing on “predictive processing” touches on this equivalence.)
If you’ve been trying to low-key authority at someone to show up on time, and then you start looking real closely at what you’re doing, one potential outcome is that you simply anticipate they’ll show up, yes. In this case, it’s like… think of the difference between “I expect myself to get up and run a mile today” when you really don’t wanna and you can feel the tension that exercising authority is creating and you’re not entirely sure it’ll keep working… and then compare that to what it feels like when “run a mile” is just what you do, like getting a glass of water when you’re thirsty, or brushing your teeth in the morning (hopefully). It may still suck, and you may not *like* running, but you notice your feet start walking out the door almost “on their own” because “not running” isn’t actually a thing anymore. Any tension there is the uncertainty you’re trying to deny in order to insist reality bend to your will, and when you look closely and find out that it’s definitely gonna happen, it goes away because you’re not uncertain anymore.
In the other extreme when you find that it’s definitely not gonna happen, you “stop caring” in the sense that you no longer get bothered by it, but not in the sense that you’d no longer appreciate the guy showing up on time, and not in the sense that you stop exerting optimization pressure in that direction. It actually frees you up to optimize a lot more in that direction, because you’re no longer navigating by a bad map, you no longer come off as passive aggressive or aggressive and lacking in empathy, and you’re not bound to expecting success before you do something. So for example, that recent case I referenced involved me feeling annoyed by an acquaintance’s condescending douchery. Once I looked at where I was going wrong (why he was the way he was, why my annoyance had no authority over him, etc), I no longer “cared” in the sense that his behavior didn’t annoy me anymore. But also, that lack of annoyance opened up room for me to challenge and tease him without it being perceived as (or being) an ego threat, and now I actually like the guy and rather than condescending to me he regularly asks for my input on things (even though his personality limitations are still there).
In the middle, you realize you don’t actually know whether or not they’re going to start showing up on time. Instead of asserting “I expect!” hoping for the best, you realize that you can’t “decide” what they do, but they can, so you ask: “Do you think you’re going to start coming in on time?”. And you wait for an answer. And you look to see what this means they will actually do. This feels very different from the other side. Instead of feeling like you’re being projected at, it feels like you’re being seen. You can’t just “say” you will, because your boss is no longer looking to see if you’ll prop up his semi-delusional fantasy a little longer; he’s looking to see what you will do. Instead of being pushed into a role where you grumble “Yes sir..” because you have no choice and having things happen to you that are out of your control, the weight of the decision is on your shoulders, and you feel it. Are you going to start showing up on time?
I’m not going to demand anything, especially when I don’t plan to reply again after this. (Or… okay, I said I’d limit myself to two more replies but I’m going to experiment with allowing myself short non-effortful ones if I’m able to make them. Like, if you want to ask questions that have simple answers I’m not going to rule out answering them. But I am still going to commit to not putting significant effort into further replies.)
But the thing that brought me into this conversation was the semantic stop sign thing. It still seems to me like that part of the conversation went “you should do X” / “I did X” / “this isn’t about you, no one does X”. And based on my current understanding of what you meant by “semantic stopsign”, I agree that gjm didn’t do X, and it feels like you’ve ignored both him and myself trying to point this out.
I expect there’s a charitable explanation for this, but I honestly don’t have one in mind.
I can see what you mean, but it’s pretty different. Fixed goal posts, consistent experience of observing minds changing when they reach them, known base rates, calibrating to the specifics of what is and isn’t said, etc. I can explain if you want.
Mm, I think I know what you mean, but… I don’t think I trust that you’re at that level?
So, okay, I have to remember here that this thread originally came up when I felt like you’d think you knew better than me what was in my own head, and then fair play to you, you didn’t. But then you defended the possibility of doing that, without having done it in that specific case. So this bit has to be caveated with a “to the extent that you actually did the thing”, which I kind of think you did a bit with gjm and Richard but I’m not super sure right now.
As I said, I agree it’s possible to know what’s going on in someone else’s mind better than they do. I agree that the things you say here make it more likely than otherwise.
But at best they’re epistemically illegible; you can’t share the evidence that makes you confident here, in a way that someone else can verify it. And it’s worse than that, because they’re the sort of thing I feel like people often self-delude about. Which is not to say you’re doing that; only that I don’t think I can or should rule out the possibility.
So these situations may seem very different to you, and you may be right. But as a reader, they look very similar, and I think I’m justified in reacting to them similarly.
There are ways to make me more disposed to believe you in situations like this, which I think roughly boil down to “make it clear that you have seen the skulls”. I’ve written another recent comment on that subject, though only parts of it are relevant here.
No good thing to quote here, but re expecting: I feel like you’re saying “these are the same” and then describing them being very different.
So sure, I expect-2 my employee to show up on time, and then I do this mental shift. Then either I expect-1 him to show up on time; or I realize I don’t expect-1 him to show up on time, and then I can deal with that.
And maybe this is a great mental shift to make. Actually I’d say I’m pretty bullish on it; this feels to me more like “oh, you mean that thing, yeah I like that thing” than like “oh, that’s a thing? Huh” or “what on earth does that mean?”
So I don’t think the point of friction here is about whether or not I understand the mental shift. I think the point of friction is, I have no idea why you described it as “there’s only one kind of expect”.
Like… even assuming I’ve made this mental shift, that’s not the same as just expect-1ing him to show up on time? This feels like telling me that “a coin showing heads” is the same as “a coin that I’ve just flipped but not yet looked at”, because once I look I’ll either have the first thing or I’ll be able to deal with not having it. Or that “a detailed rigorous proof” is the same as “a sketch proof that just needs filling out”, because once I fill out the details I’ll either have the first thing or I’ll be able to deal with the fact that my proof was mistaken.
And that’s from the perspective of the boss. Suppose I’m the employee and my boss says that to me. I can’t make the mental shift for her. It probably wouldn’t go down very well to ask “ah, but do you predict that I’ll show up on time? Because if you don’t, then you should come to terms with that and work with me to...”
Maybe if my boss did make this mental shift, then that would be good for me too. But given that she hasn’t, I kind of need to know: when she used the word “expect” there, was that expect-1 or expect-2? Telling me about a mental shift she could make in the way she relates to expectations seems unhelpful. Telling me that the two kinds of expectations are the same seems worse than useless.
I’m not going to demand anything, especially when I don’t plan to reply again after this.
“Demand” is just a playful way of saying it. Feel free to state that you think what I skipped over is important as well. Or not.
But the thing that brought me into this conversation was the semantic stop sign thing. It still seems to me like that part of the conversation went “you should do X” / “I did X” / “this isn’t about you, no one does X”. And based on my current understanding of what you meant by “semantic stopsign”, I agree that gjm didn’t do X, and it feels like you’ve ignored both him and myself trying to point this out.
I’m confused. I assume you meant to say that you agree with gjm that he *did* do X, and not that you agree with me that he didn’t?
Anyway, “You should do X”/”I did X”/”No one does X” isn’t an accurate summary. To start with, I didn’t say he *should* do anything, because I don’t think that’s true in any sort of unqualified way—and this is important because a description of effects of a type of action is not an accusation while the presupposition that he isn’t doing something he should be doing kinda is. Secondly, the thing I described the benefits of, which he accused me of accusing him of not doing, is not a thing I said “no one does”. Plenty of people do that on plenty of occasions. Everyone *also* declines to do it in other cases, and that is not a contradiction.
The actual line I said is this:
The frame that “I know that X will happen, and I’m just saying it shouldn’t” falls apart when you look at it closely and stop allowing “should” to function as a semantic stop sign
Did he “look closely” and “stop allowing ‘should’ to function as a semantic stop sign”? Here’s his line:
you expect-2 X when you think that X should happen (more precisely, that some person/group/institution should make it happen; more precisely, that the world will be a better place according to your values or theirs if they do).
He did take the first step. You could call it two, if you want to count “this specific person is the one who should make it happen” as a separate step, but it’s not a sequential step and not really relevant. “This should happen” → “the world would be better if it did” is the only bit involving the ‘should’, and that’s a single step.
Does that count as “looking closely”? I don’t see how it can. “Looking at all”, sure, but I didn’t say “Even the most cursory look possible will reveal..”. You have to look *closely*. AND you have to “stop allowing ‘should’ to function as a semantic stopsign”.
He did think “What do I mean by that?”, and gave a first level answer to the question. But he didn’t “stop using should as a stop sign”. He still used “should”, and “should” is still a stop sign. When you say “By ‘should’, I mean ____”, what you’re doing is describing the location of the stop sign. He may have moved it back a few yards, but it’s still there, as evidenced by the fact that he used “should” and then attributed meaning to it. When you stop using should as a stopsign, there’s no more should. As in “I don’t think Chapman ‘should’ do anything. The concept is incoherent”.
It’s like being told “This thing you’re in is an airplane. If you open the throttle wide, and you resist the temptation to close it, you will pick up speed and take off”, and then thinking you’ve falsified that because you opened the throttle for three seconds and the plane didn’t take off.
I expect there’s a charitable explanation for this, but I honestly don’t have one in mind.
In general it’s better to avoid talking about specific things people have done which can be interpreted as “wrong” unless you have an active reason to believe that focus will actually stay on “is it true?” rather than “who loses status if it’s true”—or unless the thing is actually “wrong” in the sense that the behavior needs to be sanctioned. It’s not that things can’t get dragged there anyway if you’re talking about the abstract principles themselves, but at least there’s a better chance of focus staying on the principles where it should be.
I was kinda hoping that by saying “Takeoff distance is generally over a quarter mile, and many runways are miles long”, you’d recognize why the plane didn’t take off without needing to address it specifically.
So, okay, I have to remember here that this thread originally came up when I felt like you’d think you knew better than me what was in my own head, and then fair play to you, you didn’t. But then you defended the possibility of doing that, without having done it in that specific case. So this bit has to be caveated with a “to the extent that you actually did the thing”, which I kind of think you did a bit with gjm and Richard but I’m not super sure right now.
Well, I was pretty careful to not comment on what Richard and gjm were doing. I didn’t accuse gjm of anything, nor did I accuse Richard of anything. I see what TAG saw. I also saw gjm respond to my “self-predictably false expectation is a failure of rationality” in the way that someone would respond if they weren’t aware of any reason to believe it other than a lack of awareness of the perspective which claims that “there are two senses of the word ‘expect’” is a solution—and in a way that I can’t imagine anyone responding if they were aware of the very good reasons that can coexist with that awareness.
I think those pieces of evidence are significant enough that dismissing them as meaningless is a mistake, so I defended TAG’s decision to highlight a potential problem and I chose to highlight another myself. Does it mean that they *were* doing the things that this interpretation of the evidence points towards? Not necessarily. I also didn’t assert anything of the sort. It’s up to the individual to figure out how likely they think that is.
If, despite not asserting these things, you think you know enough about what’s going on in my mind that you can tell both my confidence level and how my reasoning doesn’t justify it, then by all means lemme know :P
But at best they’re epistemically illegible; you can’t share the evidence that makes you confident here, in a way that someone else can verify it.
I mean, not *trivially*, yeah. Such is life.
And it’s worse than that, because they’re the sort of thing I feel like people often self-delude about. Which is not to say you’re doing that; only that I don’t think I can or should rule out the possibility.
For sure, it’s definitely a thing that can happen and you shouldn’t rule it out unless you can tell that it’s not that—and if you say you can’t tell it’s not that, I definitely believe you. However, “it’s just self delusion” does make testable predictions.
So for example, say I claim to be able to predict the winning lottery numbers but it’s really just willful delusion. If you say “Oh that’s amazing! What are tomorrow’s numbers?”, then I’m immediately put to the choice of 1) sticking my neck out, lying, and putting a definite expiration date on having any of my BS taken seriously, 2) changing my story in “unlikely” ways that show me to be dodging this specific prediction without admitting to a general lack of predicting power (“Oh, it doesn’t work on March 10ths. Total coincidence, I know. Every other day though…”), or 3) clarifying that my claims are less bold than that (“I said I can predict *better than chance*, but it’s still only a ~0.1% success rate”), and getting out of having my claims deflated by deflating them myself.
By iterating these things, you can pretty quickly drive a wedge in that separates sincere people from the delusional—though clever sociopathic liars will be bucketed with the sincere until those expiration dates start arriving. It takes on the order of n days to bound their predictive power to at most ~1/n, but delusion can be detected as fast as anticipations can be elicited.
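To put a rough number on “on the order of n days”: here is a minimal sketch (purely my own illustration, with an arbitrary 95% confidence threshold, not anything from the original exchange) of the standard binomial bound you get if you elicit one concrete prediction per day and none of them come true.

```python
def max_credible_success_rate(failed_trials: int, confidence: float = 0.95) -> float:
    """Upper bound on a claimed predictor's per-trial success rate after
    `failed_trials` elicited predictions, none of which came true.

    Finds the largest p consistent with seeing zero successes, i.e. the p
    satisfying (1 - p)**n = 1 - confidence.
    """
    if failed_trials <= 0:
        return 1.0  # no elicited predictions yet, so no bound at all
    return 1.0 - (1.0 - confidence) ** (1.0 / failed_trials)

# After ~30 days of concrete-but-wrong lottery predictions, the claimed
# ability is bounded to roughly a 10% per-day hit rate at 95% confidence;
# i.e. it takes on the order of n trials to push the bound down to ~1/n.
for n in (3, 10, 30, 100):
    print(n, round(max_credible_success_rate(n), 3))
```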
But as a reader, they look very similar, and I think I’m justified in reacting to them similarly.
Well, you’re justified in being skeptical, for sure. But there’s an important difference between “Could be just self delusion, I dunno..” and “*Is* just self delusion”—and I think you’d agree that the correct response is different when you haven’t yet been able to rule out the possibility that it’s legit.
There are ways to make me more disposed to believe you in situations like this, which I think roughly boil down to “make it clear that you have seen the skulls”.
For sure, there are skulls everywhere. The traps get really subtle and insidious, and getting comfortable and declaring oneself “safe” isn’t a thing you ever get to do. However, it sounds like the traps you’re talking about are the ones along the lines of “failing to even check whether you anticipate it being true before saying ‘Pshh, you’re just saying that because you haven’t read Guns, Germs, and Steel. Trust me bro, read it and you’ll believe me’” -- and those just aren’t the traps that are gonna get ya if you’re trying at all.
My point though was that there are successes everywhere too. “Seeing someone’s mind do a thing that they themselves do not see” is very very common human behavior, even though it’s not foolproof. In fact, a *really good* way to find out what your own mind is doing is to look at how other people respond to you, and to try to figure out what it is they’re seeing. That’s how you find things that don’t fit your narrative.
I’ve written another recent comment on that subject, though only parts of it are relevant here.
I get your distaste for that kind of comment, and I agree that there are ways Val could have put in more effort to make it easier to accept. At the same time, recoiling from such things is a warning sign, and “nuggets of wisdom from above” is the last thing you want to tax.
I still remember something Val said to me years ago that had a similar vibe. In the end, I don’t think he was right, but I do think he was picking up on something and I’m glad he was willing to share the hypothesis. Certainly some other nuggets have been worth the negligible cost of listening to them.
So I don’t think the point of friction here is about whether or not I understand the mental shift. I think the point of friction is, I have no idea why you described it as “there’s only one kind of expect”.
Because there’s only one kind of expect. There’s “expecting”, and there’s “failing to expect, while pretending to be expecting and definitely not failing”. These are two distinct things, yes. Yet only the former is actually expecting.
It can seem like “I expect-2, then I introspect and things change, and I come out of it with expect-1”. As if “expect-2” is a tool that is distinct from expect-1 and sometimes the better tool for the job, but in this case you set the former down and picked up the latter. As if in *this case* you looked closer and thought “Oh wow, I guess I was mistaken! That’s a torx bolt not an allen bolt!”.
There’s *another* mental shift though, on the meta level, which starts to happen after you do this enough.
So you keep reaching for “expect-2”, and it kinda sorta works from time to time, but *every time* you look closer, you think “Ah, this is another one of those cases where an expect-2 isn’t the right tool!”. And so eventually you start to notice that it’s curiously consistent, but you think “Well, seeing a bunch of white swans doesn’t disprove the existence of black swans! I just haven’t found the right job for this tool yet!”—or rather “All the right jobs are coincidentally the ones I haven’t examined in much detail! Because they’re so obvious!”.
Eventually you start to notice that there’s a pattern to it. It’s not just “This context is completely different, the considerations that determine which tool to use are completely different, and what a coincidence! The answer still points the same way!”. It’s “Oh, I followed the same systematic path, and ended up with the same realization. I wonder if maybe there’s something fundamental going on here?”. Eventually you get to the point where you start to look at the path itself, and recognize that what you’re doing is exposing delusion, and the things which tell you what step to take next are indicators of delusion which you’ve been following. Eventually you notice that the whole “unique flavor” that *defined* “expect-2” is actually the flavor of delusion which you’ve been seeking out and exposing. And that the active ingredient in there, which made it kinda work when it did, has been expect-1 this whole damn time. It’s not “a totally different medicine”. It’s the same medicine mixed with horseshit.
At some point it becomes a semantic debate because you can define a sequence of characters to mean anything—if you don’t care about it being useful or referring to the same thing others use it to refer to. You could define “expect-2” as “expect-1, mixed with horseshit, and seen by the person doing it as a valid and distinct thing which is not at all expect-1 mixed with horseshit”, but it won’t be the same thing others refer to when they say “expect-2”—because they’ll be referring to a valid and distinct thing which is not at all expect-1 mixed with horseshit (even though no such thing exists), and when asked to point at “expect-2” they will point at a thing which is in fact a combination of expect-1 and horseshit.
Like… even assuming I’ve made this mental shift, that’s not the same as just expect-1ing him to show up on time? This feels like telling me that “a coin showing heads” is the same as “a coin that I’ve just flipped but not yet looked at”, because once I look I’ll either have the first thing or I’ll be able to deal with not having it.
Expectations will shift. To start with you have a fairly even allocation of expectation, and this allocation will shift to something much more lopsided depending on the evidence you see. However, it was never actually in a state of “Should be heads, dammit”. That wasn’t a “different kind of expectation, which can be wrong-1 without being wrong-2, and was 100% allocated to heads”. Your expectation-1 was split 50/50 between heads and tails, and you were swearing up and down that tails wasn’t a legitimate possibility because you didn’t want it to be. That is all there is, and all there ever was.
And that’s from the perspective of the boss. Suppose I’m the employee and my boss says that to me. I can’t make the mental shift for her. It probably wouldn’t go down very well to ask “ah, but do you predict that I’ll show up on time? Because if you don’t, then you should come to terms with that and work with me to...”
Maybe if my boss did make this mental shift, then that would be good for me too. But given that she hasn’t, I kind of need to know: when she used the word “expect” there, was that expect-1 or expect-2? Telling me about a mental shift she could make in the way she relates to expectations seems unhelpful. Telling me that the two kinds of expectations are the same seems worse than useless.
Ah, but look at what you’re doing! You’re talking about telling your boss what she “should” do! You’re talking about looking away from the fact that you know damn well what she means so that you can prop up this false expectation that your boss will “come to terms with that”! *Of course* that’s not going to work!
You want to go in the opposite direction. You want to understand *exactly* what she means: “I’m having trouble expecting you to do what I want. I’m a little bothered by that. Rather than admit this, I am going to try to take it out on you if you don’t make my life easier by validating my expectations”. You want to not get hung up at the stage of “Ugh, I don’t want to have to deal with that”/”She shouldn’t do that, and I should tell her so!”, and instead do the work of updating your own maps until you no longer harbor known-false expectations or attach desires to possibilities which aren’t real.
When you’ve done that, you won’t think to say “You should come to terms with that” to your boss, even if everyone would be better off if she did, because doing so will sound obviously stupid instead of sounding like something that “should” work. What you choose to say still depends on what you end up seeing but whatever it is will feel *different* -- and quite different on the other side too.
Imagine you’re the boss putting on your serious face and telling an employee that you expect them to show up on time from now on. It’s certainly aggravating if they say “Ah, but do you mean that? You should work on that!”. But what if you put your serious face on, you say to them “Bob, I noticed that you’ve been late a couple times recently, and I expect you to be on time from now on”, and in response, Bob gives you a nice big warm smile and exclaims “I like your optimism!”.
It still calls out the same wishful thinking on the boss’s part, but in a much more playful way that isn’t flinching from anything. Sufficiently shitty bosses can hissy fit about anything, but if you imagine how *you* would respond as a boss, I think you’d have a hard time not admitting to yourself “Okay, that’s actually kinda funny. He got me”, even if you try to hide it from the employee. I expect that you’d have a real hard time being mad if the employee followed up “I like your optimism!” with a sincere “I expect I will too.”. And I bet you’ll be a little more likely to pivot from “I expect!” towards something more like “It’s important that we’re on time here, can I trust that you won’t let me down?”.
(But one thing I am fairly sure is not true is that LW-rationalists as such haven’t noticed, or that LW-rationality as such doesn’t acknowledge, such elementary observations as “effective reasoning involves working out how to solve problems and not just learning stereotyped ways to solve specific preordained problems” and “things happen in contexts and you should pay attention to those” and “when solving a problem, you should also consider whether you should actually be solving a different problem” and “sometimes the problems you’re presented with are not very clearly defined”, and to whatever extent “meta-rationality” is supposed to be distinguished from What We Do Around Here by recognizing this sort of thing I think there are straw men being erected.
I think the opposite. If the majority of LessWrongians were following meta-rules like “neat models don’t always apply to messy problems” or “you have to remember that every real computer and every real human is finite” or “mathematically true doesn’t imply real-world true”, then they would reject Aumann’s theorem AND Solomonoff induction AND Bayes as they are understood and promoted here. But clearly the majority don’t reject all three. (And they may well be applying meta-rationality correctly to areas that are less tribally totemic).
That isn’t a maximally straightforward answer, because you have to start by pointing out that the framing of the question is wrong: Cambridge University is not in some specific location within Cambridge. Reframing is a sticking point for some people … if you haven’t answered the question as stated, you haven’t answered it, in their view.
Having got past that, you can only give examples. Here’s one.
gjm’s two example answers — one that is useful in the context of the tourist’s question and one that is useless — illustrate the situation excellently.
Beginning by telling the questioner that their question is wrong is not useful, because away is not a direction. There’s no point telling them to “get out of the car” and refusing to engage further until they do. That will only stroke one’s own smugness.
Begin instead with what is true. Go on with more things that are true.
But Cambridge University is away from any point in the middle of Cambridge … so that is true. And pointing at one particular college is useful. There’s no useful way to point in every direction simultaneously.
You can’t just tell the truth relentlessly. You can only speak or write words, which are subject to the reader’s or listener’s interpretation.
If someone doesn’t understand what you are saying, then you can either go up to a meta level or give up.
When someone asks me, “What is X?”, I automatically rephrase the question as “Tell me what you know about X that is relevant and significant in the present context.” Up, down, sideways is not the point. I tell them whatever I can that seems to me that they do not know and need to know, and steer according to their response.
Were you expecting one weird trick?
I’m hoping for at least something, but it never arrives.
A tourist in Cambridge, UK, once asked me where the university was. (Full disclosure: I was a tourist too.)
I agree with Richard: there are perfectly good answers to this, which do in fact involve providing “at least something” concrete. For instance, someone helpful and fairly verbose might say:
“It’s a bit complicated, because the university has colleges and academic departments and they’re spread out all over the city. But from here I can show you some things you might be interested in. Over there is the Senate House: that’s where the governing body of the university has its meetings. On that side you can see King’s College—you might recognize its chapel—and over on the other side is a less famous college, Gonville & Caius, which you probably haven’t heard of but it’s where Stephen Hawking was a fellow. The big church behind us isn’t part of the university but it is associated with the university, and some of the regulations students have to obey say things like they have to be within 5 miles of this church for so many days per year. The academic departments—things like history, mathematics and so on—generally don’t live in beautiful historic buildings, and in any case you can’t see any of them from here, but if you want to see one I think the nearest to here is if you go along the street, past King’s and St Catharine’s colleges and what used to the the Cambridge University Press, and turn right down Silver Street: on your right just before the river is the Department of Sociology, which used to be Pure Mathematics. It’s nothing much to look at, though. If you want to know where everything is, I can show you a lot of it on a map, but there are bits of university all over the city, especially in the centre. Or if you just want to see some of the highlights, if you walk the length of this street starting at the far end that way, you’ll see a bunch of the most famous colleges: St John’s, Trinity, Gonville & Caius, King’s, St Catharine’s, Pembroke, and Peterhouse. Trinity is the biggest and richest. Peterhouse is the smallest and oldest.”
The sort of answer Richard is complaining about would go more like this:
“Well, the University is not the same thing as one of its colleges, or the same thing as one of its departments. Indeed, the university is not the same thing as all of its colleges or all of its departments. You might say that the university is the totality of all the teaching and research it does, but that isn’t really it either. The university is all around you, but if you aren’t part of it you probably can’t see it. Trinity College has more Nobel prizewinners than any of the others, but that doesn’t mean it’s where the university really is, and people at Trinity actually have rather a reputation for thinking the world revolves around them. The University of Cambridge is one of the world’s greatest academic institutions.”
… which isn’t wrong and points out some things that the other very concrete sort of answer ignores or glosses over—e.g., what a university actually is—but doesn’t do anything to answer the question the tourist is trying to ask.
It’s reasonable to want such a thing, but David is quite explicit about the fact that he hasn’t (yet) given Richard what he wants.
He is also explicit about why it’s not as straight forward as one might think to give that type of answer.
If you think it’s not actually that hard, then you can try giving a better answer yourself. If you think his intended audience already knows “what a university is” or else doesn’t need to know before usefully parsing an answer to “where” that isn’t simply a location, then you can make those arguments too. There are definitely ways to make criticisms that address what Chapman is saying about what he’s not saying and why.
When the response is “He only did [the thing he said he was doing]”, and it is framed as criticism rather than as “duh, why am I even saying this”, then it does call for reevaluating the expectations themselves. If the expectations were accurate there’d be no complaints, so they’re clearly not good expectations. And they didn’t come from Chapman, who explicitly disclaimed them in this post, so it’s not like it’s any evidence against what he’s saying. At that point, “What do you expect, and what makes you think Chapman not meeting your expectations is a problem with Chapman rather than a problem with your expectations?” is an entirely appropriate place to direct attention.
I have two suspicions and it’s difficult to distinguish between them.
1. There’s less to meta-rationality than meets the eye, because the insights, abilities, etc. that it actually provides are not in fact new but are things that many competent rationalists are already deploying.
2. There’s less to meta-rationality than meets the eye, because actually “there’s no there there” at all: all “meta-rationality” is is a habit of looking down on rationalists.
For the avoidance of doubt, that’s an enumeration of my suspicions and I am not intending to rule out a third possibility, that
3. Meta-rationality really is a thing, its practitioners really are more insightful, more effective, etc., than anyone who practises rationality and doesn’t explicitly think in terms of meta-rationality (or at least more effective than those people would be if they didn’t), and either it’s just really difficult to explain clearly or else its proponents prefer not to for some reason.
I suspect that actually there are elements of all three. At any rate, I neither know nor profess to know exactly what combination of them may be in play, which means I’m not in a position to “give a better answer [my]self”.
(But one thing I am fairly sure is not true is that LW-rationalists as such haven’t noticed, or that LW-rationality as such doesn’t acknowledge, such elementary observations as “effective reasoning involves working out how to solve problems and not just learning stereotyped ways to solve specific preordained problems” and “things happen in contexts and you should pay attention to those” and “when solving a problem, you should also consider whether you should actually be solving a different problem” and “sometimes the problems you’re presented with are not very clearly defined”, and to whatever extent “meta-rationality” is supposed to be distinguished from What We Do Around Here by recognizing this sort of thing I think there are straw men being erected.)
“Expect” means two different things: you expect-1 X when you think X will probably happen; you expect-2 X when you think that X should happen (more precisely, that some person/group/institution should make it happen; more precisely, that the world will be a better place according to your values or theirs if they do).
If someone says “I am not going to give you clear answers about this” and proceeds not to give clear answers, then for sure you shouldn’t expect-1 that they will give you clear answers. But you could still think that they should; you could still think that if they don’t then what they say isn’t very useful, or that if they don’t it indicates that they’re not being honest somehow.
Consider the opening of Chapman’s Eggplant. Chapman suggests, though he doesn’t quite claim explicitly, that the techniques he’s going to be trying to teach are what distinguishes the people whose extraordinary effectiveness in technical fields looks like magic; what enables them to do things that seem “exciting, magic, an incomprehensible breakthrough”. He says that up to now this sort of ability has had to be learned “through apprenticeship and experience” … but that “this book is the first practical introduction”.
I think it is reasonable to ask the question: has Chapman in fact presented us with (1) any evidence that techniques he understands and we muggles don’t (but, under his tutelage, maybe could) could in fact elevate us to that level if we aren’t there already, or (2) an actual “practical introduction” that will enable (more than a minuscule fraction of) us to do such things? And I think it’s clear that the answer so far is no.
Now, of course there’s nothing wrong with not having finished something yet. But if I were writing a book that promised to teach its readers to do magic, and it contained as yet no information about how to do magic and no evidence that they will ever be able to do magic, I would put prominent disclaimers and warnings to that effect right beside the bit where it makes those promises. And if I started writing such a book, wrote all the bits that make those promises, put it on the internet, and somehow never got around to writing the bits that actually teach the reader to do magic, then I think I would deserve to face a fair bit of skepticism.
You seem to frame this as either there being advanced secret techniques, or it just being a matter of common sense and wisdom and as good as useless. Maybe there’s some initial value in just trying to name things more precisely though, and painting a target of “we don’t understand this region that has a name now nearly as well as we’d like” on them. Chapman is a former AI programmer from the 1980s, and my reading of him is that he’s basically been trying to map the poorly understood half of human rationality whose difficulty blindsided the 20th century AI programmers.
And very smart and educated people were blindsided when they got around to trying to build the first AIs. This wasn’t a question of charlatans or people lacking common sense. People really didn’t seem to break rationality apart into the rule-following (“solve this quadratic equation”) and pattern-recognition (“is that a dog?”) parts, because up until the 1940s all rule-based organizations were run solely by humans, who cheat and constantly apply their pattern-recognition powers to nudge just about everything going on.
So are there better people than Chapman talking about this stuff, or is there an argument why this is an uninteresting question for human organizations despite it being recognized as a central problem in AI research with things like Moravec’s paradox?
Those suspicions are fair. I agree that Chapman does a poor job of ruling out your second suspicion (perhaps because he’s not completely innocent there), and that it takes away from his message quite a bit. I wish he’d recognize this and do a better job here.
There are two different things going on here. One is that (at least a sizable minority of) engineering professors definitely do lack not only those distinctions, but the ability to see those distinctions when slapped in the face with strong evidence that they’re missing something. It would probably boggle your mind, as it did mine at the time. You can argue that LW is generally above that and therefore doesn’t need Chapman, but that is a very different thing from denying or failing to recognize the existence and importance of these phenomena in what are normally thought of as “smart rational people”.
The second is that it isn’t as simple as “Oh, I recognize that” or “I can’t see it yet”. It’s also possible to recognize it in the abstract, but fail to connect all the dots in practice, and therefore think you have it all figured out when there is still much to learn. For example, how many times have you seen someone claim “Science has shown Y” and treat Y as if it were “Scientifically verified” itself when in fact science only verified X, which plausibly but by no means certainly implies Y? How many of those people would say anything but “Duh.” if you remind them that the scientific result is distinct from their interpretation of the result, and that it’s possible in theory for the result to be right and their conclusion wrong? In my experience, and I expect yours to be similar, a large majority of people are simultaneously aware of the possibility in the abstract and yet conflate the two without awareness even when the two things aren’t that close.
Rather than interpreting it as “None of y’all are even aware of this obvious thing!”, I’d interpret it more as “This deserves more attention, because the subtleties are a bit tricky and it’s a more important distinction than many of you realize”. It may still be false, as applied to you or LW, but it’s a very different claim and much more reasonable.
Heh, I understand that perspective. It’s convincing, but ultimately false. The frame that “I know that X will happen, and I’m just saying it shouldn’t” falls apart when you look at it closely and stop allowing “should” to function as a semantic stop sign. I distinctly remember the conversation where I was insisting that, and where my perspective was torn apart piece by piece by someone who had been down those roads further than I.
Once you start digging into the “why?” behind your expectation-2s failing to be realized, and acknowledging the truths as you see them, some interesting things start happening. You can only want for things that you are holding room for as “possible” in a sense (though it can certainly seem otherwise!), and so once you recognize why it’s not actually possible for things to have gone differently, your “wants” change to match. Your “expectation-2s” shift to things which are likely to actually become realized and effectiveness goes way the fuck up—as you might expect from taking out relevant inaccuracies from the map you’re using to navigate.
This also works for things like “irrational fear” and even things like physical pain, where it seems even more convincing that “pain is nerve sensations, not false expectation-1s!”. It’s by no means trivial and I don’t really expect you to believe me, but this is something I routinely do myself and have walked others through many times (after being walked through it myself until I started to grok it).
Note that this is very much a status focused framing, and that such framings are fraught with problems. “I don’t want to give this person status that I’m not convinced they deserve” brings high risks of bad thinking.
I’d expect to as well, and skepticism would be fair, but “deserve” is a funny word with all those buried assumptions that start to fall apart when you look closely. I don’t think Chapman has ever shown signs of expecting (in either sense) that this skepticism not happen, or that some people won’t be of the opinion that he “deserves” it. Nor have I seen anyone else suggesting that this skepticism isn’t reasonable.
That’s just not what this is about.
It seems rather unfair to accuse me of allowing “should” to function as a semantic stop sign, when the very first thing I did after writing “should” was to go into more detail about what “should” might mean in more concrete terms.
My “status-focused framing” is simply making more explicit a status-focused framing that I think is already there when people talk about “meta-rationality”, “post-rationalism”, and the like. I agree that turning intellectual discussions into status fights is harmful, and my intention was to draw attention to the fact that that’s a thing that Chapman and others are doing.
I’m not accusing you; it’s not a “you” thing. Everyone uses the word “should” as a semantic stop sign, myself included, because that’s fundamentally what the word is. It’s not a problem so long as you’re aware of the trade offs involved, and clearly you’re aware that there is a failure mode to be avoided. However, as I said last time, it’s not as simple as “perfectly aware” vs “totally unaware”. The more subtle parts are not at all obvious, and plenty of very smart people who I have a lot of respect for still miss them. Missing the full extent of it is the default.
Yes, I am aware that this is what your intention is, and that from your perspective it looks like the status nonsense is already there because of the way he is going about things. Still, “drawing attention to [my accusation that] this is what the other guy is doing” doesn’t get you out of status issues. It focuses your attention on “How much status does this guy deserve, and do I really have to accept a low status position as a ‘muggle’ that needs him to ‘enlighten’ me”, and in doing so distracts from and obscures “Is there something true and useful that he is attempting to communicate?”—which is the right way to determine who gets the “status” benefits of being listened to anyway.
Coincidentally, just this morning I wrapped up a conversation with a friend that very much relates to what we’re talking about here. It took me a while to convey to her how expect-2 is actually just expect-1 in disguise, but in the end it changed her perspective from one where achieving the “should” was out of the question in her mind to one which mirrors times she’s been successful with similar things in the past and she can anticipate success—and in a way that I think fits Chapman’s concept of a failure of rationality and success with metarationality (at least, if I’m understanding Chapman correctly). Also tied in there is a bit about my own struggles with pretty much the same problem, and how I only got out of it by treating my own (correct!) thoughts of “This other guy is doing status bullshit” as a warning sign of my own imperfections there, and setting aside the semantic stop-signs I had been resting on.
It’s a little difficult to explain, but if you’re interested in hearing a real world example I’ll try to see if I can spell it out in a way that makes sense.
What does it mean for a word to fundamentally be a semantic stop sign?
As gjm noted, after using the word he elaborated on it:
If I were to say “Bill Gates should put some of his money into AI safety”, I take it you think “should” is being a semantic stop sign. If I say “the world would be a better place according to my values if Bill Gates were to put some of his money into AI safety”, is there still a semantic stop sign there? Do you claim that’s not what I meant by “should”?
A word is fundamentally a semantic stopsign when the whole purpose of the word is to cover up detail and allow you to communicate without having to address what’s underneath.
As I mentioned before, this isn’t always a problem. If someone says “I just realized I should pee before we leave”, and then goes and pees, then there really isn’t an issue there. We can still look more closely, if we want, but we aren’t going to find anything interesting that changes the moral of the story. They realized that if they don’t pee before leaving, they will end up with a full bladder and no convenient way to empty it. Does it mean they would otherwise have to waste time pulling over? That they won’t have a chance and will pee themselves? It doesn’t really matter, because it is sufficiently clear that it’s in everyone’s best interest to let the guy go pee before embarking on the road trip with him. The right answer is overconstrained here.
Similarly, “Bill should donate” can be unproblematic—if the details being glossed over don’t change the story. Sometimes they do.
If you say “Bill Gates should!” and then say “Well, what I really mean by that is… that I would like it, personally, because it would make the world better according to my values”, then that changes things drastically. “Should!” has a moral imperative that “I’d personally like...” simply does not—unless it somehow really matters what you’d like. Once you get rid of “should” you have to expose the driving force behind any imperative you want to make. Is it that you just want his money for yourself? Do you have a well thought out moral argument that Bill should find compelling? If so, what is it, and why should Bill find it compelling?
Very frequently, people will run out of justification before their point becomes compelling. I have a friend, for example, who thinks “Health care ‘should’ be ‘free’”, and who gets quite grumpy if you point out her lack of an actual argument. Fundamentally, what she means is “I want healthcare, and I don’t want to pay for it”, but saying it that way would make it way too obvious that she doesn’t actually have a compelling reason why anyone should want to pay for her healthcare—so she sticks with “it should be free”. This isn’t a political statement, btw, since I’m not saying that good arguments don’t exist, or that “health care ‘shouldn’t’ be ‘free’”. It’s just that she wants the world to be a certain way that would be convenient to her, and the way things currently are violate a “fairness” intuition she has, so she’s upset about it without really understanding what to do about it. She doesn’t see any reason that everyone else would feel compelling, and so she moralizes in the hopes that the justification is either intuitively obvious to everyone else, or else that people will care that she feels that way.
And that’s an empirical prediction. If you say “You should do X” and “expect-2” at them to do it, and they do, then clearly your moralizing had sufficient force and you were right to think you could stop at that level of detailed support. If you start expecting at your dog to sit, and you’ve never taught it to sit, then there’s just no way to fill in the details behind “the dog should sit” which make any sense. “The world would be better if it sat”—sure, let’s grant that. What do you think you’re accomplishing by announcing this fact instead of teaching the dog to sit? Notice how that statement is oddly out of place? Notice how the “should” and “expectation-2” kinda deflate once you recognize that the expectation will necessarily be falsified?
Returning to the “irrational fear” example, the statement is “I shouldn’t be afraid”. If you follow that up with a lack of fear, then fine. Otherwise, you have a contradiction. Get rid of the “should”, and see what happens. “I shouldn’t be afraid” → “I feel fear even though there is no danger”. Oh, there’s no danger? Now that you’re making this claim explicitly, how do you know? How come it looks like you think you’re going to splat your head on the concrete below? What does your brain anticipate happening, and why is it wrong? Have you checked to make sure the concrete anchors have been set properly? Are you sure you aren’t missing something that could lead to your head splatting on the concrete below?
When you can confidently answer “Yes, I have checked the knots, I have checked the anchors, and there is no way my head will splat on the concrete below. My brain was anticipating falling without any actual cause, and I can see now that there are no paths to that outcome”, then how do you maintain the fear? What are you afraid of, if that can’t happen? It’s like trying to say “I don’t believe it’s raining, but it is raining”. Even if you feel the same “fear” sensations, once you’re confident that they don’t mean anything it just becomes “I feel these sensations which don’t mean anything”. Okay, so what? If you’re sure they don’t mean anything then go climb. We call that “excitement”, btw.
When you “should” or “expect” at a thing, and your expectations are being falsified rather than validated, then that’s the cue to look deeper. It means that the stuff being buried underneath isn’t working out like you think it should, so you’re probably wrong about something. If you’re trying and failing to rock climb without fear, it probably means you were flinching away from actually addressing the dangers and that you need to check your knots, check whether you’ve done enough checking, and then once you do that you will find yourself climbing without being burdened by fear. If you’re trying to say that someone should do something you want them to do and they aren’t doing it, it probably means you have a gap in your model about why they would care or else how they would know—and once you figure that out you’ll find yourself happily explaining more, or creating a reason for them to care, or realizing that your emotions had you acting out of line—depending on the case at hand.
That sorta make sense? I know it’s a bit far from intuitive.
It makes sense as, like, a discussion of “this is sometimes what’s going on when people use the word should”. I’m far from convinced that that’s always what’s going on, or that it’s what’s going on in this particular situation.
Like, I feel like you’re taking something that should be at the level of “this is a hypothesis to keep in mind” and elevating it to “this is what’s true”.
(Oh hey, I used “should”. What do I mean by that? I guess kind of the same as if I said “if you add this list of numbers, you should get zero”. That is, I feel like you’re making a mistake to hold this thing at a level of confidence different from what I think is the correct level of confidence. Was there more to my “should” than that? Quite possibly, but… I feel like you’re going to have a confident prediction about what that was? And I want to point out that while it’s possible you know better than me what’s going on inside my head, it’s not the default guess.)
I guess I also want to point out that the sequence of events here is, in part:
Richard says a thing.
TAG and then yourself use the word “expect” to suggest Richard was being unreasonable.
gjm uses the word “should” in a reply to your “expect” to suggest Richard was perhaps being reasonable after all.
Big discussion about the words “expect” and “should”.
Notably, Richard never used either of those words himself. So for example, you say “When you “should” or “expect” at a thing, and your expectations are being falsified rather than validated, then that’s the cue to look deeper.” Well, is Richard “should”ing or “expect”ing? I could believe he’s doing the thing just with different words, but I don’t think it’s been established that he is, and any discussion of “this is what these words mean” is completely besides the point.
Not that I have anything against long besides-the-point digressions, but I do think it’s good for everyone to be aware that’s what they are.
(Gonna limit myself to two more replies after this, and depending on motivation I might not even do that many.)
The hypothesis “The word ‘should’ is being used to allow communication while motivatedly covering up detail that is necessary to address” is simply one hypothesis to keep in mind, and doesn’t apply to every use of the word “should”. However “The word still functions to allow communication without having to get into further detail” is just something that is always true. What would a counterexample even look like?
I’ve tried to be explicit in the last two comments that this isn’t always a bad thing. Your use of the word “should” here seems pretty reasonable to me. That doesn’t mean that there isn’t more detail being hidden (mistake according to what values?), just that we more or less expect that the remaining ambiguity isn’t likely to be important so stopping at this level of precision is appropriate.
I feel like I’m kinda saying the same thing as last time though. Am I missing what your objection is? Do you see why “semantic stopsign” shouldn’t be seen as a boo light?
This does highlight a potential failure mode though. Determining which level of confidence is “correct” for someone else requires you to actively know something about what they’ve seen and what they’d be able to see. It’s pretty hard to justify until you can see them failing to see something.
Not in this case, no. Because it’s a fairly reasonable use, there’s no sign of failure, and therefore nothing to suggest what you might be doing without realizing you’re doing it.
If your answer to 54+38 is 92, I don’t have any way of knowing how you got there other than it worked. Maybe you used a calculator, or maybe you had a lucky guess. If you say 82, then I can make an educated guess that you used the method they teach in elementary school and forgot to carry the one. If you get 2052, I can guess pretty confidently that you used a calculator and hit the “x” button instead of the “+” button.
“Knowing what’s going on inside someone’s head better than they do” just means recognizing failure modes that they don’t recognize.
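As a toy sketch of what recognizing failure modes looks like mechanically (my own illustration; the function and the error patterns are made up for the example, not anything from the thread): you can work backwards from a wrong answer to the procedure that most plausibly produced it.

```python
def diagnose_addition_error(a: int, b: int, answer: int) -> str:
    """Guess the failure mode behind an answer to a + b by checking which
    known error pattern reproduces it."""
    if answer == a + b:
        return "correct (method unknown: calculator, mental math, or a lucky guess)"
    if answer == a * b:
        return "likely hit 'x' instead of '+' on a calculator"

    # Column-wise addition with the carries dropped, e.g. 54 + 38 -> 82.
    no_carry, place, x, y = 0, 1, a, b
    while x or y:
        no_carry += ((x % 10 + y % 10) % 10) * place
        x, y, place = x // 10, y // 10, place * 10
    if answer == no_carry:
        return "likely did column addition and forgot to carry the one"

    return "no recognizable failure mode; need more evidence"

print(diagnose_addition_error(54, 38, 92))    # correct
print(diagnose_addition_error(54, 38, 82))    # forgot to carry
print(diagnose_addition_error(54, 38, 2052))  # multiplied instead of adding
```

The point of the sketch is just that a wrong answer carries information about the process that produced it, in a way a right answer mostly doesn’t.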
Richard is not on trial. He didn’t do anything anti-social that calls for such a trial. It would be presumptuous to speak for him, and inappropriately hostile to accuse him. I’m uncomfortable with the implication that this is what the “point” is, and perhaps should have disclaimed this explicitly early on. Heck, his main point isn’t even wrong or unreasonable.
It’s just that the fact that he felt it to be necessary to say suggests something about his unspoken expectations—because Chapman’s/TAG’s expectations wouldn’t have led to that comment feeling relevant even though it’s still fairly true. Productive conversation requires addressing the actual disagreement rather than talking past each other, and when those disagreements are buried in the underlying expectations, this means pointing the conversation there. That’s why TAG basically asked “What do you expect?”, and why it was the correct thing to do given the signs of expectation mismatches. Gjm responded to this by saying that “expect doesn’t always mean expect”, which is an extremely common misconception, and understanding why that is wrong is important—not just for the ability to have these kinds of discussions productively, but also that.
Hm. So I thought you were referring to a word like “cult” or “fascist”, where part of what’s going on is mixing normative and descriptive claims in a way that obfuscates. But now it seems you might have meant a word like “tree” or “lawyer” or “walk”; that is, that just about every word is a semantic stopsign?
And then when you say “stop allowing “should” to function as a semantic stop sign”, you mean “dig below the word “should” to its referent”, much as I might do to explain the words “tree” or “lawyer” or “walk” to someone unfamiliar with them?
But as gjm and I have both noted, he did that. Like, it sounds like that part of the conversation went: “you should do X” / “I did X” / “this isn’t about you, no one does X”.
I confess I do not understand this.
(It’s not that I thought you’d necessarily see my “should” above as bad. I just expected you’d think there was more going on with it than I thought was going on with it.)
Not sure if this part is particularly relevant given my apparent misunderstanding of “semantic stopsign”. But to be clear, I meant a factual mistake.
Well, yes, and like I said it’s possible. And in the example you gave, I agree those would be good guesses.
But also, it seems common to me that someone will think they recognize a failure mode in someone else, one that that person doesn’t recognize; and that they’ll be wrong. Things like, “oh, the reason you support X is because you haven’t read Y”. Or “I would have liked that film if I hadn’t noticed the subtext; so when you say you liked that film, you must not have noticed the subtext; so when I point out the subtext you will stop liking the film”.
Something you said pattern matches very strongly to this, for me: “Heh, I understand that perspective. … I distinctly remember the conversation where I was insisting that...”.
It’s entirely possible that your framing there is priming me to read things into your words that you aren’t putting there yourself, and if I’m doing that then I apologize.
Well, honestly I’m still not convinced this is wrong. I had thought that getting into “should” might help clear this up, but it hasn’t, so.
It just doesn’t seem at all problematic to me that “I expect you to show up on time from today on” and “I expect the sun to rise tomorrow” are using two different senses of “expect”. It sounds like you think something like… if the speaker of the first one meditates on wanting, they will either anticipate that the listener will show up on time, or they will stop caring whether the listener shows up on time? I’m guessing that’s not a good description of what you think, but that’s what I’m getting.
Now to be clear, I wouldn’t describe this “expect” the same way gjm described “expect-2”. He made the distinction: “you expect-1 X when you think X will probably happen; you expect-2 X when you think that X should happen”. And I think I’d change expect-2 to something like “when you think that X should happen, and are low-key exercising some authority in service of causing X to happen”. Like “I expect you to show up on time” sounds to me like an order backed by corporate hierarchy, and “England expects every man will do his duty” sounds like an order backed by the authority of the crown. “I expect open source developers to be prompt at responding to bug reports” sounds like it’s exercising moral authority. And if we make this distinction, then it does not seem to me like Richard was expect-2ing anything of Chapman.
But that doesn’t seem particularly relevant, because:
So I agree this seems like the sort of thing that’s important-in-general to know about. If the word “expect” has only one common meaning, then I certainly want to know that; and if it has two, then I expect you want to know that.
But it still doesn’t seem like it matters in this specific case. This conversation stems from hypothesizing about what’s going on inside Richard’s head, and he didn’t use the word in question. So like,
“Richard is expecting ___” / “That seems like a fine thing for him to do, because “expect” can also mean ___” / “No it can’t, because...”
It seems like the obvious thing to do here, if we’re going to hypothesize about what’s going on in Richard’s head, is to just stop using the word “expect”? Going into “what does “expect” mean” seems like the opposite of productive disagreement.
For sake of brevity I’m going to respond just to the parts I see as more likely to be fruitful, but feel free to demand a response to anything I skip over and I’ll give one.
Yes, closer to the latter. There’s always more underneath, even with words like “tree”.
However, the word “should” is a bit different, in ways we touch on below.
Hm, okay.
And maybe there is. “Factual mistake” isn’t perfectly defined either. We could get further into the ambiguities there, but it’s all going to feel “yeah but that doesn’t matter” because it doesn’t. It’s defined well enough for our purposes here.
The important distinction to track here is whether the person is closing the loop or just saying whatever first comes to mind with no accountability. When the prediction fails, is there surprise and an update? Or do the goalposts keep moving and moving? The latter is obviously common and almost always leads to wrongness, but that isn’t a mark on the former which actually works pretty well.
I can see what you mean, but it’s pretty different. Fixed goal posts, consistent experience of observing minds changing when they reach them, known base rates, calibrating to the specifics of what is and isn’t said, etc. I can explain if you want.
No worries. I can tell you’re working in entirely good faith here. I’m not confident that I can convey what I’d like to convey in the amount of effort you’re willing to put into this conversation, but if I can’t it’s definitely because I’ve failed to cross the inferential distance and not because your mind isn’t open.
“Convinced” is a high bar, and we have spent relatively few words on the topic. Really grokking this stuff requires “doing” rather than just “talking about”. Meaning, actually playing one or both sides of “attempting to hold onto the frame where a failing ‘should’/‘expect-2’ is logically consistent and not indicative of wrongness” and “systematically tearing that frame apart by following the signs to the missing truth”. And then doing it over and over, over a wide range of things and into increasingly counterintuitive areas, until the idea that “Maybe this time it’ll be different!” stops feeling realistic and starts feeling like a joke. Working through each example usually takes an hour or two of back and forth until all the objections are defeated and the end result recognized as inevitable rather than merely “plausible”.
I’d count it a success if you walk away skeptical, but with a recognition that you can’t rule it out either, and a good enough sketch that you can start filling in the details.
Yes! Not quite, but close!
So yes, those two are different. And yes, “low-key exercising authority” is a key distinction to make here. However, it’s not the case that expecting your employee to show up on time is simultaneously “low-key exercising authority” AND “not a prediction”. It’s either still a prediction, or it’s not exercising authority. The mechanism of exercising authority is through predicting people will do as you direct them to, and if you lose that then you don’t actually have authority and are simply engaging in make believe.
This is a weird concept, but “intentions” and “expectations” are kinda the same thing, related to differently. This is why your mom could tell you “You are going to start behaving right now!” and you don’t get confused about why she’s giving an order as if it’s a prediction. It’s why your coach in high school would say “You have to believe you can win!”, and why some kids really did choke under pressure and under-perform relative to what they were otherwise capable of. When it comes to predicting whether you’ll get a glass of water when you’re thirsty, you can trivially realize either prediction, so you solve this ambiguity by choosing to predict you’re going to get what you want and act so as to create that reality. If you want to start levitating a large object with your mind, you can’t imagine that working so it gets really hard to even intend to do it. That’s the whole “use the try harder, Luke” stuff. When it gets hard to expect success, it gets hard to even try. (Scott’s writing on “predictive processing” touches on this equivalence.)
If you’ve been trying to low-key authority at someone to show up on time, and then you start looking real closely at what you’re doing, one potential outcome is that you simply anticipate they’ll show up, yes. In this case, it’s like… think of the difference between “I expect myself to get up and run a mile today” when you really don’t wanna and you can feel the tension that exercising authority is creating and you’re not entirely sure it’ll keep working… and then compare that to what it feels like when “run a mile” is just what you do, like getting a glass of water when you’re thirsty, or brushing your teeth in the morning (hopefully). It may still suck, and you may not *like* running, but you notice your feet start walking out the door almost “on their own” because “not running” isn’t actually a thing anymore. Any tension there is the uncertainty you’re trying to deny in order to insist reality bend to your will, and when you look closely and find out that it’s definitely gonna happen, it goes away because you’re not uncertain anymore.
In the other extreme when you find that it’s definitely not gonna happen, you “stop caring” in the sense that you no longer get bothered by it, but not in the sense that you’d no longer appreciate the guy showing up on time, and not in the sense that you stop exerting optimization pressure in that direction. It actually frees you up to optimize a lot more in that direction, because you’re no longer navigating by a bad map, you no longer come off as passive aggressive or aggressive and lacking in empathy, and you’re not bound to expecting success before you do something. So for example, that recent case I referenced involved me feeling annoyed by an acquaintance’s condescending douchery. Once I looked at where I was going wrong (why he was the way he was, why my annoyance had no authority over him, etc), I no longer “cared” in the sense that his behavior didn’t annoy me anymore. But also, that lack of annoyance opened up room for me to challenge and tease him without it being perceived as (or being) an ego threat, and now I actually like the guy and rather than condescending to me he regularly asks for my input on things (even though his personality limitations are still there).
In the middle, you realize you don’t actually know whether or not they’re going to start showing up on time. Instead of asserting “I expect!” hoping for the best, you realize that you can’t “decide” what they do, but they can, so you ask: “Do you think you’re going to start coming in on time?”. And you wait for an answer. And you look to see what this means they will actually do. This feels very different from the other side. Instead of feeling like you’re being projected at, it feels like you’re being seen. You can’t just “say” you will, because your boss is no longer looking to see if you’ll prop up his semi-delusional fantasy a little longer; he’s looking to see what you will do. Instead of being pushed into a role where you grumble “Yes sir..” because you have no choice and having things happen to you that are out of your control, the weight of the decision is on your shoulders, and you feel it. Are you going to start showing up on time?
I’m not going to demand anything, especially when I don’t plan to reply again after this. (Or… okay, I said I’d limit myself to two more replies but I’m going to experiment with allowing myself short non-effortful ones if I’m able to make them. Like, if you want to ask questions that have simple answers I’m not going to rule out answering them. But I am still going to commit to not putting significant effort into further replies.)
But the thing that brought me into this conversation was the semantic stop sign thing. It still seems to me like that part of the conversation went “you should do X” / “I did X” / “this isn’t about you, no one does X”. And based on my current understanding of what you meant by “semantic stopsign”, I agree that gjm didn’t do X, and it feels like you’ve ignored both him and myself trying to point this out.
I expect there’s a charitable explanation for this, but I honestly don’t have one in mind.
Mm, I think I know what you mean, but… I don’t think I trust that you’re at that level?
So, okay, I have to remember here that this thread originally came up when I felt like you’d think you knew better than me what was in my own head, and then fair play to you, you didn’t. But then you defended the possibility of doing that, without having done it in that specific case. So this bit has to be caveated with a “to the extent that you actually did the thing”, which I kind of think you did a bit with gjm and Richard but I’m not super sure right now.
As I said, I agree it’s possible to know what’s going on in someone else’s mind better than they do. I agree that the things you say here make it more likely than otherwise.
But at best they’re epistemically illegible; you can’t share the evidence that makes you confident here in a way that someone else can verify. And it’s worse than that, because they’re the sort of thing I feel like people often self-delude about. Which is not to say you’re doing that; only that I don’t think I can or should rule out the possibility.
So these situations may seem very different to you, and you may be right. But as a reader, they look very similar, and I think I’m justified in reacting to them similarly.
There are ways to make me more disposed to believe you in situations like this, which I think roughly boil down to “make it clear that you have seen the skulls”. I’ve written another recent comment on that subject, though only parts of it are relevant here.
No good thing to quote here, but re expecting: I feel like you’re saying “these are the same” and then describing them being very different.
So sure, I expect-2 my employee to show up on time, and then I do this mental shift. Then either I expect-1 him to show up on time; or I realize I don’t expect-1 him to show up on time, and then I can deal with that.
And maybe this is a great mental shift to make. Actually I’d say I’m pretty bullish on it; this feels to me more like “oh, you mean that thing, yeah I like that thing” than like “oh, that’s a thing? Huh” or “what on earth does that mean?”
So I don’t think the point of friction here is about whether or not I understand the mental shift. I think the point of friction is, I have no idea why you described it as “there’s only one kind of expect”.
Like… even assuming I’ve made this mental shift, that’s not the same as just expect-1ing him to show up on time? This feels like telling me that “a coin showing heads” is the same as “a coin that I’ve just flipped but not yet looked at”, because once I look I’ll either have the first thing or I’ll be able to deal with not having it. Or that “a detailed rigorous proof” is the same as “a sketch proof that just needs filling out”, because once I fill out the details I’ll either have the first thing or I’ll be able to deal with the fact that my proof was mistaken.
And that’s from the perspective of the boss. Suppose I’m the employee and my boss says that to me. I can’t make the mental shift for her. It probably wouldn’t go down very well to ask “ah, but do you predict that I’ll show up on time? Because if you don’t, then you should come to terms with that and work with me to...”
Maybe if my boss did make this mental shift, then that would be good for me too. But given that she hasn’t, I kind of need to know: when she used the word “expect” there, was that expect-1 or expect-2? Telling me about a mental shift she could make in the way she relates to expectations seems unhelpful. Telling me that the two kinds of expectations are the same seems worse than useless.
“Demand” is just a playful way of saying it. Feel free to state that you think what I skipped over is important as well. Or not.
I’m confused. I assume you meant to say that you agree with gjm that he *did* do X, and not that you agree with me that he didn’t?
Anyway, “You should do X”/“I did X”/“No one does X” isn’t an accurate summary. To start with, I didn’t say he *should* do anything, because I don’t think that’s true in any sort of unqualified way—and this is important because a description of the effects of a type of action is not an accusation, while the presupposition that he isn’t doing something he should be doing kinda is. Secondly, the thing I described the benefits of, which he accused me of accusing him of not doing, is not a thing I said “no one does”. Plenty of people do that on plenty of occasions. Everyone *also* declines to do it in other cases, and that is not a contradiction.
The actual line I said is this:
Did he “look closely” and “stop allowing ‘should’ to function as a semantic stop sign”? Here’s his line:
He did take the first step. You could call it two, if you want to count “this specific person is the one who should make it happen” as a separate step, but it’s not a sequential step and not really relevant. “This should happen”->”the world would be better if it did” is the only bit involving the ‘should’, and that’s a single step.
Does that count as “looking closely”? I don’t see how it can. “Looking at all”, sure, but I didn’t say “Even the most cursory look possible will reveal..”. You have to look *closely*. AND you have to “stop allowing ‘should’ to function as a semantic stopsign”.
He did think “What do I mean by that?”, and gave a first level answer to the question. But he didn’t “stop using should as a stop sign”. He still used “should”, and “should” is still a stop sign. When you say “By ‘should’, I mean ____”, what you’re doing is describing the location of the stop sign. He may have moved it back a few yards, but it’s still there, as evidenced by the fact that he used “should” and then attributed meaning to it. When you stop using should as a stopsign, there’s no more should. As in “I don’t think Chapman ‘should’ do anything. The concept is incoherent”.
It’s like being told “This thing you’re in is an airplane. If you open the throttle wide, and you resist the temptation to close it, you will pick up speed and take off”, and then thinking you’ve falsified that because you opened the throttle for three seconds and the plane didn’t take off.
In general it’s better to avoid talking about specific things people have done which can be interpreted as “wrong” unless you have an active reason to believe that focus will actually stay on “is it true?” rather than “who loses status if it’s true”—or unless the thing is actually “wrong” in the sense that the behavior needs to be sanctioned. It’s not that things can’t get dragged there anyway if you’re talking about the abstract principles themselves, but at least there’s a better chance of focus staying on the principles where it should be.
I was kinda hoping that by saying “Takeoff distance is generally over a quarter mile, and many runways are miles long”, you’d recognize why the plane didn’t take off without needing to address it specifically.
Well, I was pretty careful to not comment on what Richard and gjm were doing. I didn’t accuse gjm of anything, nor did I accuse Richard of anything. I see what TAG saw. I also saw gjm respond to my “self-predictably false expectation is a failure of rationality” in the way that someone would respond if they weren’t aware of any reason to believe that other than a lack of awareness of the perspective that claims “there’s two senses of the word ‘expect’” is a solution—and in a way that I can’t imagine anyone responding if they were aware of the very good reasons that can coexist with that awareness.
I think those pieces of evidence are significant enough that dismissing them as meaningless is a mistake, so I defended TAG’s decision to highlight a potential problem and I chose to highlight another myself. Does it mean that they *were* doing the things that this interpretation of the evidence points towards? Not necessarily. I also didn’t assert anything of the sort. It’s up to the individual to figure out how likely they think that is.
If, despite not asserting these things, you think you know enough about what’s going on in my mind that you can tell both my confidence level and how my reasoning doesn’t justify it, then by all means lemme know :P
I mean, not *trivially*, yeah. Such is life.
For sure, it’s definitely a thing that can happen, and you shouldn’t rule it out unless you can tell that it’s not that—and if you say you can’t tell it’s not that, I definitely believe you. However, “it’s just self-delusion” does make testable predictions.
So for example, say I claim to be able to predict the winning lottery numbers but it’s really just willful delusion. If you say “Oh that’s amazing! What are tomorrow’s numbers?”, then I’m immediately put to the choice of 1) sticking my neck out, lying, and putting a definite expiration date on having any of my BS taken seriously, 2) changing my story in “unlikely” ways that show me to be dodging this specific prediction without admitting to a general lack of predictive power (“Oh, it doesn’t work on March 10ths. Total coincidence, I know. Every other day though..”), or 3) clarifying that my claims are less bold than that (“I said I can predict *better than chance*, but it’s still only a ~0.1% success rate”), and getting out of having my claims deflated by deflating them myself.
By iterating these things, you can pretty quickly drive a wedge in that separates sincere people from the delusional—though clever sociopathic liars will be bucketed with the sincere until those expiration dates start arriving. It takes on the order of n days to bound their predictive power to at most 1/n, but delusion can be detected as fast as anticipations can be elicited.
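To make that last quantitative claim a little more concrete, here’s a toy sketch of my own (not something from the thread): it assumes the simple model of eliciting one concrete yes/no prediction per day, and the function name is mine. After n consecutive misses, a standard binomial bound caps the claimant’s plausible per-prediction hit rate at roughly 3/n, i.e. on the order of 1/n, which is the scaling claimed above.

```python
# Toy sketch: bounding a claimed predictive power by eliciting one
# concrete prediction per day and counting consecutive misses.
# After n misses and 0 hits, the exact binomial upper bound solves
# (1 - p)^n = 1 - confidence for p, which is roughly 3/n at 95%.

def upper_bound_hit_rate(n_misses: int, confidence: float = 0.95) -> float:
    """95% (by default) upper bound on the true hit rate after n straight misses."""
    return 1 - (1 - confidence) ** (1 / n_misses)

if __name__ == "__main__":
    for n in (10, 30, 100):
        print(f"{n} misses -> hit rate <= {upper_bound_hit_rate(n):.3f}")
    # 10 misses  -> hit rate <= 0.259
    # 30 misses  -> hit rate <= 0.095
    # 100 misses -> hit rate <= 0.030
```

The only point of the sketch is the scaling: each additional elicited prediction tightens the bound, which is why iterating works so quickly compared to waiting for a single grand claim to expire.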
Well, you’re justified in being skeptical, for sure. But there’s an important difference between “Could be just self-delusion, I dunno...” and “*Is* just self-delusion”—and I think you’d agree that the correct response is different when you haven’t yet been able to rule out the possibility that it’s legit.
For sure, there are skulls everywhere. The traps get really subtle and insidious, and getting comfortable and declaring oneself “safe” isn’t a thing you ever get to do. However, it sounds like the traps you’re talking about are the ones along the lines of failing to even check whether you anticipate it being true before saying “Pshh, you’re just saying that because you haven’t read Guns, Germs, and Steel. Trust me bro, read it and you’ll believe me” -- and those just aren’t the traps that are gonna get ya if you’re trying at all.
My point though was that there are successes everywhere too. “Seeing someone’s mind do a thing that they themselves do not see” is very very common human behavior, even though it’s not foolproof. In fact, a *really good* way to find out what your own mind is doing is to look at how other people respond to you, and to try to figure out what it is they’re seeing. That’s how you find things that don’t fit your narrative.
I get your distaste for that kind of comment, and I agree that there are ways Val could have put in more effort to make it easier to accept. At the same time, recoiling from such things is a warning sign, and “nuggets of wisdom from above” are the last thing you want to tax.
I still remember something Val said to me years ago that had a similar vibe. In the end, I don’t think he was right, but I do think he was picking up on something and I’m glad he was willing to share the hypothesis. Certainly some other nuggets have been worth the negligible cost of listening to them.
Because there’s only one kind of expect. There’s “expecting”, and there’s “failing to expect, while pretending to be expecting and definitely not failing”. These are two distinct things, yes. Yet only the former is actually expecting.
It can seem like “I expect-2, then I introspect and things change, and I come out of it with expect-1”. As if “expect-2” is a tool that is distinct from expect-1 and sometimes the better tool for the job, but in this case you set the former down and picked up the latter. As if in *this case* you looked closer and thought “Oh wow, I guess I was mistaken! That’s a torx bolt, not an allen bolt!”.
There’s *another* mental shift though, on the meta level, which starts to happen after you do this enough.
So you keep reaching for “expect-2”, and it kinda sorta works from time to time, but *every time* you look closer, you think “Ah, this is another one of those cases where an expect-2 isn’t the right tool!”. And so eventually you start to notice that it’s curiously consistent, but you think “Well, seeing a bunch of white swans doesn’t disprove the existence of black swans! I just haven’t found the right job for this tool yet!”—or rather “All the right jobs are coincidentally the ones I haven’t examined in much detail! Because they’re so obvious!”.
Eventually you start to notice that there’s a pattern to it. It’s not just “This context is completely different, the considerations that determine which tool to use are completely different, and what a coincidence! The answer still points the same way!”. It’s “Oh, I followed the same systematic path, and ended up with the same realization. I wonder if maybe there’s something fundamental going on here?”. Eventually you get to the point where you start to look at the path itself, and recognize that what you’re doing is exposing delusion, and the things which tell you what step to take next are indicators of delusion which you’ve been following. Eventually you notice that the whole “unique flavor” that *defined* “expect-2″ is actually the flavor of delusion which you’ve been seeking out and exposing. And that the active ingredient in there, which made it kinda work when it did, has been expect-1 this whole damn time. It’s not “a totally different medicine”. It’s the same medicine mixed with horseshit.
At some point it becomes a semantic debate, because you can define a sequence of characters to mean anything—if you don’t care about it being useful or referring to the same thing others use it to refer to. You could define “expect-2” as “expect-1, mixed with horseshit, and seen by the person doing it as a valid and distinct thing which is not at all expect-1 mixed with horseshit”, but it won’t be the same thing others refer to when they say “expect-2”—because they’ll be referring to a valid and distinct thing which is not at all expect-1 mixed with horseshit (even though no such thing exists), and when asked to point at “expect-2” they will point at a thing which is in fact a combination of expect-1 and horseshit.
Expectations will shift. To start with you have a fairly even allocation of expectation, and this allocation will shift to something much more lopsided depending on the evidence you see. However, it was never actually in a state of “Should be heads, dammit”. That wasn’t a “different kind of expectation, which can be wrong-1 without being wrong-2, and was 100% allocated to heads”. Your expect-1 was split 50/50 between heads and tails, and you were swearing up and down that tails wasn’t a legitimate possibility because you didn’t want it to be. That is all there is, and all there ever was.
Ah, but look at what you’re doing! You’re talking about telling your boss what she “should” do! You’re talking about looking away from the fact that you know damn well what she means so that you can prop up this false expectation that your boss will “come to terms with that”! *Of course* that’s not going to work!
You want to go in the opposite direction. You want to understand *exactly* what she means: “I’m having trouble expecting you to do what I want. I’m a little bothered by that. Rather than admit this, I am going to try to take it out on you if you don’t make my life easier by validating my expectations”. You want to not get hung up at the stage of “Ugh, I don’t want to have to deal with that”/“She shouldn’t do that, and I should tell her so!”, and instead do the work of updating your own maps until you no longer harbor known-false expectations or attach desires to possibilities which aren’t real.
When you’ve done that, you won’t think to say “You should come to terms with that” to your boss, even if everyone would be better off if she did, because doing so will sound obviously stupid instead of sounding like something that “should” work. What you choose to say still depends on what you end up seeing, but whatever it is will feel *different* -- and quite different on the other side too.
Imagine you’re the boss putting on your serious face and telling an employee that you expect them to show up on time from now on. It’s certainly aggravating if they say “Ah, but do you mean that? You should work on that!”. But what if you put your serious face on, you say to them “Bob, I noticed that you’ve been late a couple times recently, and I expect you to be on time from now on”, and in response, Bob gives you a nice big warm smile and exclaims “I like your optimism!”.
It still calls out the same wishful thinking on the boss’s part, but in a much more playful way that isn’t flinching from anything. Sufficiently shitty bosses can throw a hissy fit about anything, but if you imagine how *you* would respond as a boss, I think you’d have a hard time not admitting to yourself “Okay, that’s actually kinda funny. He got me”, even if you try to hide it from the employee. I expect that you’d have a real hard time being mad if the employee followed up “I like your optimism!” with a sincere “I expect I will too.” And I bet you’d be a little more likely to pivot from “I expect!” towards something more like “It’s important that we’re on time here, can I trust that you won’t let me down?”
I think the opposite. If the majority of LessWrongians were following meta-rules like “neat models don’t always apply to messy problems” or “you have to remember that every real computer and every real human is finite” or “mathematically true doesn’t imply real-world true”, then they would reject Aumann’s theorem AND Solomonoff induction AND Bayes as they are understood and promoted here. But clearly the majority don’t reject all three. (And they may well be applying meta-rationality correctly to areas that are less tribally totemic).
And there is a straightforward answer to that. (I am familiar with the University of Cambridge.) There is no need for any mystification.
That isn’t a maximally straightforward answer, because you have to start by pointing out that the framing of the question is wrong: Cambridge university is not in some specific location within Cambridge. Reframing is a sticking point for some people … if you haven’t answered the question as stated, you haven’t answered it, in their view.
Having got past that, you can only give examples. Here’s one:
gjm’s two example answers — one that is useful in the context of the tourist’s question and one that is useless — illustrate the situation excellently.
Beginning by telling the questioner that their question is wrong is not useful, because “away” is not a direction. There’s no point telling them to “get out of the car” and refusing to engage further until they do. That will only stroke one’s own smugness.
Begin instead with what is true. Go on with more things that are true.
But Cambridge university is away from any point in the middle of Cambridge … so that is true. And pointing at one particular college is useful. There’s no useful way to point in every direction simultaneously.
You can’t just tell the truth relentlessly. You can only speak or write words, which are subject to the reader’s or listener’s interpretation.
If someone doesn’t understand what you are saying, then you can either go up to a meta level or give up.
When someone asks me, “What is X?”, I automatically rephrase the question as “Tell me what you know about X that is relevant and significant in the present context.” Up, down, sideways is not the point. I tell them whatever I can that seems to me that they do not know and need to know, and steer according to their response.
Well, meta-rationality helps you be less wrong.