Adam Zerner
Many years after having read it, I’m finding that the “Perils of Interacting With Acquaintances” section in The Great Perils of Social Interaction has really stuck with me. It is probably one of the more useful pieces of practical advice I’ve come across in my life. I think it’s illustrated really well in this barber story:
But that assumes that you can only be normal around someone you know well, which is not true. I started using a new barber last year, and I was pleasantly surprised when instead of making small talk or asking me questions about my life, he just started talking to me like I was his friend or involving me in his conversations with the other barber. By doing so, he spared both of us the massive inauthenticity of a typical barber-customer relationship and I actually enjoy going there now.
I make it a point to “be normal” around people, and it’s become something of a habit, one I’m glad that I’ve formed.
I get the sense that autism is particularly unclear, but I haven’t looked closely enough at other conditions to be confident in that.
Something I’ve always wondered about is what I’ll call sub-threshold successes. Some examples:
A stand-up comedian is performing. Their jokes are funny enough to make you smile, but not funny enough to pass the threshold of getting you to laugh. The result is that the comedian bombs.
Posts or comments on an internet forum are appreciated but not appreciated enough to get people to upvote.
A restaurant or product is good, but not good enough to motivate people to leave ratings or write reviews.
It feels to me like there is an inefficiency occurring in these sorts of situations. To get an accurate view of how successful something is, you’d want to incorporate all of the data, not just the data that passes whatever (positive or negative) threshold is in play. But I think these inefficiencies are usually not easy to improve on.
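To make the inefficiency concrete, here is a toy simulation of the comedian example (all of the numbers, including the laugh threshold, are made up for illustration): the room enjoys the set a fair amount on average, but because only enjoyment above the laugh threshold produces an observable signal, the comedian hears silence.

```python
import random

random.seed(0)

# Made-up model: each audience member's enjoyment of the set is a number
# in [0, 1], and they only laugh (i.e., produce observable feedback) if
# their enjoyment exceeds a threshold.
LAUGH_THRESHOLD = 0.8
enjoyment = [random.uniform(0.4, 0.75) for _ in range(200)]

avg_enjoyment = sum(enjoyment) / len(enjoyment)
laughs = sum(e > LAUGH_THRESHOLD for e in enjoyment)

print(f"average enjoyment: {avg_enjoyment:.2f}")  # ~0.57: a decent set
print(f"laughs: {laughs}")                        # 0: the comedian "bombs"
```

All of the sub-threshold enjoyment is simply invisible to the feedback channel.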
In A Sketch of Good Communication—or really, in the Share Models, Not Beliefs sequence, which A Sketch of Good Communication is part of—the author proposes that, hm, I’m not sure exactly how to phrase it.
I think the author (Ben Pace) is proposing that in some contexts, it is good to spend a lot of effort building up and improving your models of things. And that in those contexts, if you just adopt the belief of others without improving your model, well, that won’t be good.
I think the big thing here is research. In the context of research, Ben proposes that it’s important to build up and improve your model. And for you to share with the community what beliefs your model outputs.
This seems correct to me. But I’m pretty sure that it isn’t true in other contexts.
For example, I wanted to buy a new thermometer recently. Infrared ones are convenient, so I wanted to know if they’re comparably accurate to oral ones. I googled it and Cleveland Clinic says they are. Boom. Good enough for me. In this context, I don’t think it was worth spending the effort updating my model of thermometer accuracy. In this context, I just need the output.
I think it’d be interesting to hear people’s thoughts on when it is and isn’t important to improve your models. In what contexts?
I think it’d also be interesting to hear more about why exactly it is harmful, in the context of intellectual progress, to stray from building and improving your models. There’s probably a lot to say. I think I remember the book Superforecasting talking about this, but I forget.
Hm. On the one hand, I agree that there are distinct things at play here and share the instinct that it’d be appropriate to have different words for these different things. But on the other hand, I’m not sure if the different words should fall under the umbrella of solitude, like “romantic solitude” and “seeing human faces solitude”.
I dunno, maybe it should. After all, it seems that in different conceptualizations of solitude, it’s about being isolated from something (others’ minds, others’ physical presence).
Ultimately, I’m trusting Newport here. I think highly of him and know that he’s read a lot of relevant literature. At the same time, I still wouldn’t argue too confidently that his preferred definition is the most useful one.
That makes sense. I didn’t mean to imply that such an extreme degree of isolation is a net positive. I don’t think it is.
That makes sense. Although I think the larger point I was making still stands: that in reading the book you’re primarily consuming someone else’s thoughts, just like you would be if the author sat there on the bench lecturing you (it’d be different if it were more of a two-way conversation; I should have clarified that in the post).
I suppose “primarily” isn’t true for all readers, or for all books. Perhaps some readers go slowly enough that they actually spend more of their time contemplating than they do reading, but I get the sense that that is pretty rare.
Cool! I have a feeling you’d like a lot of Cal Newport’s work like Digital Minimalism and Deep Work.
When I’m walking around or riding the train, I want to be able to hear what’s going on around me.
That makes sense about walking around, but why do you want to hear what’s going on around you when you’re riding the train?
Yeah, that all makes sense. I think solitude probably exists along a spectrum, where in listening to music maybe you have 8/10 solitude instead of 10/10 but in watching a TV show you only get 2/10. The relevant question is probably “to what extent are the outputs of other minds influencing your thoughts”.
Actually, now that I think about it, I wonder why we’re focusing on the outputs of other minds. What about other things that influence your thoughts? Like, I don’t know, bumble bees flying around you? I’m afraid of bumble bees so I know I’d have trouble focusing on my own thoughts in that scenario.
That said, I’m sure that outputs of other minds are probably a large majority of what is intrusive and prevents you from focusing on your own thoughts. But it still seems to me like the thing we actually care about is being able to focus on your own thoughts, not just reducing your exposure to the outputs of other minds.
Hm. I was actually assuming in this post that the podcasts in question were actually “Effective Information” as opposed to “Trivia” or “Mental Masturbation”. The issue is that even if they are “Effective Information”, you also need to have solitude in your “diet”, and the benefit of additional “Effective Information” probably isn’t worth the cost of less solitude.
But I’m also realizing now that much of the time podcasts aren’t actually “Effective Information” and are instead something like “Trivia” or “Mental Masturbation”. I see that as a separate but also relevant problem, and I think that carbs are probably a good analogy for it too. Or maybe something like refined sugar: it’s a quick hedonic hit that’s probably ok in limited doses, but you really don’t want to have too much of it in your diet.
The claim is that it’s helpful, not that it’s necessary. I certainly agree that good ideas can come from low-solitude things like conversations.
But I think solitude also has lingering benefits. Like, maybe experiencing some solitude puts you in position to have productive conversations. On the other hand, maybe if you spend weeks in solitude-debt you’ll be in a poor position to have productive conversations. Something like that.
I would buy various forms of merch, including clothing. I feel very fond of LessWrong and would find it cool to wear a shirt or something with that brand.
No. DOGE didn’t cross my mind. It was most directly inspired by the experience of realizing that I can factor in the journey as well as the destination with my startup.
I think it can generate negative externalities at times. However, I think that in terms of expected value it’s usually positive.
In public policy, experimenting is valuable. In particular, it provides a positive externality.
Let’s say that a city tests out a somewhat quirky idea, like paying NIMBYs to shut up about new housing. If that policy works well, other cities benefit too, because now they can adopt the proven approach themselves.
So then, shouldn’t there be some sort of subsidy for cities that test out new policy ideas? Isn’t it generally a good thing to subsidize things that provide positive externalities?
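Here is a toy version of the arithmetic (all of the dollar figures are made up for illustration): an experiment that is net-negative for the city running it can still be net-positive for society once spillovers to other cities are counted, and a subsidy covering the city’s private shortfall flips its decision.

```python
# Toy externality arithmetic; all figures are made-up illustrations.
cost_to_city = 5.0        # $M to run the policy experiment
benefit_to_city = 4.0     # $M of expected local benefit
spillover_per_city = 0.5  # $M of expected benefit per city that copies it
copying_cities = 20

private_value = benefit_to_city - cost_to_city   # -1.0: the city declines
social_value = (
    private_value + spillover_per_city * copying_cities
)                                                # +9.0: society wants it

# The smallest subsidy that makes the experiment worthwhile for the city:
min_subsidy = max(0.0, -private_value)           # 1.0

print(private_value, social_value, min_subsidy)
```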
I’m sure there is a lot to consider. I’m not enough of a public policy person to know what the considerations are though or how to weigh them.
Pet peeve: when places close before their stated close time. For example, I was just at the library. Their signs say that they close at 6pm. However, they kick people out at 5:45pm. This caught me off guard and caused me to break my focus at a bad time.
The reason that places do this, I assume, is that employees need to leave when their shift ends. In the library’s case, it probably takes 15 minutes or so to get everyone out the door, so the staff spend the last 15 minutes of their shift shooing people out. But why not make the official closing time 5:45pm while continuing to end employees’ shifts at 6:00pm?
I also run into this with restaurants. With restaurants, it’s a little more complicated because there are usually two different closing times that are relevant to patrons: when the kitchen closes and when doors close. Unless food is served ~immediately like at Chipotle or something, it wouldn’t make sense to make these two times equivalent. If it takes 10 minutes to cook a meal, doors close at 9:00pm, and someone orders a meal at 8:59pm, well, you won’t be able to serve the meal before they need to be out.
But there’s an easy solution to this: just list each of the two close times. It seems like that would make everyone happy.
I wonder how much of that is actually based on science, and how much is just superstition / scams.
In basketball there isn’t any certification. Coaches and trainers are usually former players who have had some amount of success themselves, which points towards them being at least somewhat competent. There’s also the fact that if you don’t feel like you’re making progress with a coach, you can fire them and hire a new one. But I think there is also a reasonably sized risk of a coach lacking competence and certain players sticking with them anyway, for a variety of reasons.
I’m sure that similar things are true in other fields, both in other sports and in fields like chess where there isn’t a degree you could get. In fields with certifications and degrees it probably happens less often, but I know I’ve dealt with my fair share of incompetent MDs and PhDs.
So ultimately, I agree with the sentiment that finding competent coaches might involve some friction, but despite that, it still feels to me like a very tractable problem. Relatedly, I’m seeing now that there has been some activity on the topic of coaching in the EA community.
What is specific, from this perspective, to AI alignment researchers? Maybe the feeling of great responsibility, or a higher chance of burnout and nightmares?
I don’t expect that the needs of alignment researchers are too unique when compared to the needs of other intellectuals. I mention alignment researchers because I think they’re a prototypical example of people having large, positive impacts on the world, as opposed to intellectuals who study string theory or something.
I was just watching this Andrew Huberman video titled “Train to Gain Energy & Avoid Brain Fog”. The interviewee was talking about track athletes and stuff their coaches would have them do.
It made me think back to Anders Ericsson’s book Peak: Secrets from the New Science of Expertise. The book is popular for discussing the importance of deliberate practice, but another big takeaway from the book is the importance of receiving coaching. I think that takeaway gets overlooked. Top performers in fields like chess, music and athletics almost universally receive coaching.
And at the highest levels the performers will have a team of coaches. LeBron James is famous for spending roughly $1.5 million a year on his body.
And he’s like, “Well, he’s replicated the gym that whatever team — whether it was Miami or Cleveland — he’s replicated all the equipment they have in the team’s gym in his house. He has two trainers. Everywhere he goes, he has a trainer with him.” I’m paraphrasing what he told me, so I might not be getting all these facts right. He’s got chefs. He has all the science of how to sleep. All these different things. Masseuses. Everything he does in his life is constructed to have him play basketball and to stay on the court and to be as healthy as possible and to absorb punishment when he goes into the basket and he gets crushed by people.
This makes me think about AI safety. I feel like the top alignment researchers—and ideally a majority of competent alignment researchers—should have such coaching and resources available to them.
I’m not exactly sure what form this would take. Academic/technical coaches? Writing coach? Performance psychologists? A sleep specialist? Nutritionist? Meditation coach?
All of this costs money of course. I’m not arguing that this is the most efficient place to allocate our limited resources. I don’t have enough of an understanding of what the other options are to make such an argument.
But I will say that providing such resources to alignment researchers seems like it should pretty meaningfully improve their productivity. And if so, we are in fact funding constrained. I recall (earlier?) conversations claiming that funding isn’t a constraint, and that the real constraint is that there aren’t good places to spend such money.
Also relevant is that this is perhaps an easier sell to prospective donors than something more wacky. It seems like a safe bet to have a solid impact, and there’s precedent for providing expert performers with this sort of coaching, so it might be appealing to donors.
Finally, I recall hearing at some point that in a field like physics, the very top researchers—people like Einstein—have a very disproportionate impact. If so, I’d think that it’s at least pretty plausible that something similar is true in the field of AI alignment. And if it is, then it’d probably make sense to spend time 1) figuring out who the Einsteins are and then 2) investing in them and doing what we can to maximize their impact.
I’ve been doing Quantified Intuitions’ Estimation Game every month. I really enjoy it. A big thing I’ve learned from it is the instinct to think in terms of orders of magnitude.
Well, not necessarily orders of magnitude, but something similar. For example, a friend just asked me about building a little web app calculator to provide better handicaps in golf scrambles. In the past I’d get a little overwhelmed thinking about how much time such a project would take and default to saying no. But this time I noticed myself approaching it differently.
Will it take minutes? Eh, probably not. Hours? Possibly, but seems a little optimistic. Days? Yeah, seems about right. Weeks? Eh, possibly, but even with the planning fallacy, I’d be surprised. Months? No, it won’t take that long. Years? No way.
With this approach I can figure out the right ballpark very quickly. It’s helpful.
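A minimal sketch of the idea in code (the bucket names and representative sizes are my own made-up calibrations, each roughly an order of magnitude apart): rather than estimating an exact duration, pick the timescale that is closest on a log scale.

```python
import math

# Rough timescale buckets for a project, in hours of work.
# The representative sizes are made-up calibrations.
BUCKETS = [
    ("minutes", 0.25),
    ("hours", 3),
    ("days", 30),
    ("weeks", 150),
    ("months", 600),
    ("years", 7000),
]

def nearest_timescale(estimate_hours: float) -> str:
    """Pick the bucket closest to the estimate on a log scale."""
    return min(
        BUCKETS,
        key=lambda bucket: abs(math.log(estimate_hours) - math.log(bucket[1])),
    )[0]

# A small web app calculator might plausibly be ~40 hours of work:
print(nearest_timescale(40))  # -> "days"
```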