I want to have a dialog about what’s true, at the level of piece-by-piece reasoning and piece-by-piece causes. I appreciate that you, Rob, are trying to do this; “pedantry,” as you put it, is great, and seems to me to be a huge chunk of why LW is a better place to sort some things out than is most of the internet.
Yay! I basically agree. I called it “pedantry” because I said it even though (a) I thought you already believed it (and were just speaking imprecisely / momentarily focusing on other things), (b) it’s an obvious observation that a lot of LWers already have cached, and (c) it felt tangential to the point you were making. So I wanted to flag it as a change of topic inspired by your word choice, rather than as ordinary engagement with the argument I took you to be making.
and that you talk of my post as trying to push the pendulum in a particular way, and “people trying to counter burnout,” and whether this style of post “works” for others.
I think I came to the post with a long-term narrative (which may have nothing to do with the post):
There are a bunch of (Berkeley-ish?) memes in the water related to radical self-acceptance, being kind to yourself, being very wary of working too hard, staying grounded in ordinary day-to-day life, being super skeptical and cautious around Things Claiming To Be Really Important and around moralizing, etc.
I think these are extremely important and valuable memes that LW would do well to explore, discuss, and absorb much more than it already has. I’ve found them personally extremely valuable, and a lot of my favorite blog posts to send to new EAs/rats make points like ‘be cautious around things that make big moralistic demands of you’, etc.
But I also think that these kinds of posts are often presented in ways that compete against the high value I (genuinely, thoughtfully) place on the long-term future, and the high probability I (genuinely, thoughtfully) place on AI killing me and my loved ones, as though I need to choose between the “chill grounded happy self-loving unworried” aesthetic and the “working really hard to try to solve x-risk” aesthetic.
This makes me very wary, especially insofar as it isn’t making an explicit argument against x-risk stuff, but is just sort of vaguely associating not-worrying-so-much-about-human-extinction with nice-sounding words like ‘healthy’, ‘grounded’, ‘relaxed’, etc. If these posts spent more time explicitly arguing for their preferred virtues and for why those virtues imply policy X versus policy Y, rather than relying on connotation and implicature to give their arguments force, my current objection would basically go away.
If more of the “self-acceptance, be kind to yourself, be very wary of working too hard, etc.” posts were more explicit about making space for possibilities like ‘OK, but my best self really does care overwhelmingly more about x-risk stuff than everything else’ and/or ‘OK, but making huge life-changes to try to prevent human extinction really is the psychologically healthiest option for me’, I would feel less suspicious that some of these posts are doing the dance wrong, losing sight of the fact that both magisteria are real, are part of human life.
I may have been primed to interpret this post in those terms too much, because I perceived it to be a reaction to Eliezer’s recent doomy-sounding blog posts (and people worrying about AI more than usual recently because of that, plus ML news, plus various complicated social dynamics), trying to prevent the community from ‘going too far’ in certain directions.
I think the post is basically good and successful at achieving that goal, and I think it’s a very good goal. I expect to link to the OP a lot in the coming months. But it sounds like I may be imposing context on the post that isn’t the way you were thinking about it while writing it.
I may have been primed to interpret this post in those terms too much, because I perceived it to be a reaction to Eliezer’s recent doomy-sounding blog posts (and people worrying about AI more than usual recently because of that, plus ML news, plus various complicated social dynamics), trying to prevent the community from ‘going too far’ in certain directions. … But it sounds like I may be imposing context on the post that isn’t the way you were thinking about it while writing it.
Oh, yeah, maybe. I was not consciously responding to that. I was consciously responding to a thing that’s been bothering me quite a bit about EA for ~5 or more years, which is that there aren’t enough serious hobbies around here IMO, and also people often report losing the ability to enjoy hanging out with friends, especially friends who aren’t in these circles, and just enjoying one another’s company while doing nothing, e.g. at the beach with some beers on a Saturday. (Lots of people tell me they try allocating days to this; it isn’t about the time, it’s about an acquired inability to enter certain modes.)
Thanks for clarifying this though, that makes sense.
I have some other almost-written blog posts that’re also about trying to restore access to “hanging out with friends enjoying people” mode and “serious hobbies” mode, that I hope to maybe post in the next couple weeks.
Back in ~2008, I sat around with some others trying to figure out: if we’re successful in getting a lot of people involved in AI safety—what can we hope to see at different times? And now it’s 2022. In terms of “there’ll be a lot of dollars people are up for spending on safety”, we’re basically “hitting my highest 2008 hopes”. In terms of “there’ll be a lot of people who care”, we’re… less good than I was hoping for, but certainly better than I was expecting. “Hitting my 2008 ‘pretty good’ level.” In terms of “and those people who care will be broad and varied and trying their hands at making movies and doing varied kinds of science and engineering research and learning all about the world while keeping their eyes open for clues about the AI risk conundrum, and being ready to act when a hopeful possibility comes up” we’re doing less well compared to my 2008 hopes. I want to know why and how to unblock it.
In terms of “and those people who care will be broad and varied and trying their hands at making movies and doing varied kinds of science and engineering research and learning all about the world while keeping their eyes open for clues about the AI risk conundrum, and being ready to act when a hopeful possibility comes up” we’re doing less well compared to my 2008 hopes. I want to know why and how to unblock it.
I think to the extent that people are failing to be interesting in all the ways you’d hoped they would be, it’s because being interesting in those ways seems to them to have greater costs than benefits. If you want people to see the benefits of being interesting as outweighing the costs, you should make arguments to help them improve their causal models of the costs, and to improve their causal models of the benefits, and to compare the latter to the former. (E.g., what’s the causal pathway by which an hour of thinking about Egyptology or repairing motorcycles or writing fanfic ends up having, not just positive expected usefulness, but higher expected usefulness at the margin than an hour of thinking about AI risk?) But you haven’t seemed very interested in explicitly building out this kind of argument, and I don’t understand why that isn’t at the top of your list of strategies to try.
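To spell out the comparison in that parenthetical a bit more formally (just an illustrative sketch in my own notation, not anything you’ve endorsed): write $t_H$ for hours spent on the hobby, $t_A$ for hours spent thinking about AI risk, and $U$ for whatever you ultimately care about. Then the claim that needs defending is roughly

$$\frac{\partial\, \mathbb{E}[U]}{\partial t_H} \;>\; \frac{\partial\, \mathbb{E}[U]}{\partial t_A},$$

i.e. that the marginal hour of Egyptology or motorcycle repair beats the marginal hour of AI-risk work, not merely that it has positive expected value. That is the comparison I'd want to see argued for explicitly.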