I want to have a dialog about what’s true, at the level of piece-by-piece reasoning and piece-by-piece causes. I appreciate that you, Rob, are trying to do this; “pedantry”, as you put it, is great, and seems to me to be a huge chunk of why LW is a better place to sort some things out than is most of the internet.
I’m a bit confused that you call it “pedantry”, and that you talk of my post as trying to push the pendulum in a particular way, and “people trying to counter burnout,” and whether this style of post “works” for others. The guess I’m forming, as I read your (Rob’s) comment and to a lesser extent the other comments, is that a bunch of people took my post as a general rallying cry against burnout, and felt it necessary to upvote my post, or to endorse my post, because they personally wish to take a stand against burnout. Does something like that seem right/wrong to anyone? (I want to know.)
I… don’t want that, although I may have done things in my post to encourage it anyhow, without consciously paying attention. But if we have rallying cries, we won’t have the kind of shared unfiltered reasoning that someone wanting truth can actually update on.
I’m in general pretty interested in strategies anyone has for having honest, gritty, mechanism-by-mechanism discussion near a Sacred Value. “Don’t burn people out” is arguably a Sacred Value, such that it’ll be hard to have open conversation near it in which all the pedantry is shared in all the directions. I’d love thoughts on how to do it anyhow.
I want to have a dialog about what’s true, at the level of piece-by-piece reasoning and piece-by-piece causes. I appreciate that you, Rob, are trying to do this; “pedantry”, as you put it, is great, and seems to me to be a huge chunk of why LW is a better place to sort some things out than is most of the internet.
Yay! I basically agree. The reason I called it “pedantry” was because I said it even though (a) I thought you already believed it (and were just speaking imprecisely / momentarily focusing on other things), (b) it’s an obvious observation that a lot of LWers already have cached, and (c) it felt tangential to the point you were making. So I wanted to flag it as a change of topic inspired by your word choice, rather than as ordinary engagement with the argument I took you to be making.
and that you talk of my post as trying to push the pendulum in a particular way, and “people trying to counter burnout,” and whether this style of post “works” for others.
I think I came to the post with a long-term narrative (which may have nothing to do with the post):
There are a bunch of (Berkeley-ish?) memes in the water related to radical self-acceptance, being kind to yourself, being very wary of working too hard, staying grounded in ordinary day-to-day life, being super skeptical and cautious around Things Claiming To Be Really Important and around moralizing, etc.
I think these are extremely important and valuable memes that LW would do well to explore, discuss, and absorb much more than it already has. I’ve found them personally extremely valuable, and a lot of my favorite blog posts to send to new EAs/rats make points like ‘be cautious around things that make big moralistic demands of you’, etc.
But I also think that these kinds of posts are often presented in ways that compete against the high value I (genuinely, thoughtfully) place on the long-term future, and the high probability I (genuinely, thoughtfully) place on AI killing me and my loved ones, as though I need to choose between the “chill grounded happy self-loving unworried” aesthetic and the “working really hard to try to solve x-risk” aesthetic.
This makes me very wary, especially insofar as it isn’t making an explicit argument against x-risk stuff, but is just sort of vaguely associating not-worrying-so-much-about-human-extinction with nice-sounding words like ‘healthy’, ‘grounded’, ‘relaxed’, etc. If these posts spent more time explicitly arguing for their preferred virtues and for why those virtues imply policy X versus policy Y, rather than relying on connotation and implicature to give their arguments force, my current objection would basically go away.
If more of the “self-acceptance, be kind to yourself, be very wary of working too hard, etc.” posts were more explicit about making space for possibilities like ‘OK, but my best self really does care overwhelmingly more about x-risk stuff than everything else’ and/or ‘OK, but making huge life-changes to try to prevent human extinction really is the psychologically healthiest option for me’, I would feel less suspicious that some of these posts are doing the dance wrong, losing sight of the fact that both magisteria are real, are part of human life.
I may have been primed to interpret this post in those terms too much, because I perceived it to be a reaction to Eliezer’s recent doomy-sounding blog posts (and people worrying about AI more than usual recently because of that, plus ML news, plus various complicated social dynamics), trying to prevent the community from ‘going too far’ in certain directions.
I think the post is basically good and successful at achieving that goal, and I think it’s a very good goal. I expect to link to the OP post a lot in the coming months. But it sounds like I may be imposing context on the post that isn’t the way you were thinking about it while writing it.
I may have been primed to interpret this post in those terms too much, because I perceived it to be a reaction to Eliezer’s recent doomy-sounding blog posts (and people worrying about AI more than usual recently because of that, plus ML news, plus various complicated social dynamics), trying to prevent the community from ‘going too far’ in certain directions. … But it sounds like I may be imposing context on the post that isn’t the way you were thinking about it while writing it.
Oh, yeah, maybe. I was not consciously responding to that. I was consciously responding to a thing that’s been bothering me quite a bit about EA for ~5 or more years, which is that there aren’t enough serious hobbies around here IMO, and also people often report losing the ability to enjoy hanging out with friends, especially friends who aren’t in these circles, and just enjoying one another’s company while doing nothing, e.g. at the beach with some beers on a Saturday. (Lots of people tell me they try allocating days to this; it isn’t about the time, it’s about an acquired inability to enter certain modes.)
Thanks for clarifying this though, that makes sense.
I have some other almost-written blog posts that’re also about trying to restore access to “hanging out with friends enjoying people” mode and “serious hobbies” mode, that I hope to maybe post in the next couple weeks.
Back in ~2008, I sat around with some others trying to figure out: if we’re successful in getting a lot of people involved in AI safety—what can we hope to see at different times? And now it’s 2022. In terms of “there’ll be a lot of dollars people are up for spending on safety”, we’re basically “hitting my highest 2008 hopes”. In terms of “there’ll be a lot of people who care”, we’re… less good than I was hoping for, but certainly better than I was expecting. “Hitting my 2008 ‘pretty good’ level.” In terms of “and those people who care will be broad and varied and trying their hands at making movies and doing varied kinds of science and engineering research and learning all about the world while keeping their eyes open for clues about the AI risk conundrum, and being ready to act when a hopeful possibility comes up” we’re doing less well compared to my 2008 hopes. I want to know why and how to unblock it.
In terms of “and those people who care will be broad and varied and trying their hands at making movies and doing varied kinds of science and engineering research and learning all about the world while keeping their eyes open for clues about the AI risk conundrum, and being ready to act when a hopeful possibility comes up” we’re doing less well compared to my 2008 hopes. I want to know why and how to unblock it.
I think to the extent that people are failing to be interesting in all the ways you’d hoped they would be, it’s because being interesting in those ways seems to them to have greater costs than benefits. If you want people to see the benefits of being interesting as outweighing the costs, you should make arguments to help them improve their causal models of the costs, and to improve their causal models of the benefits, and to compare the latter to the former. (E.g., what’s the causal pathway by which an hour of thinking about Egyptology or repairing motorcycles or writing fanfic ends up having, not just positive expected usefulness, but higher expected usefulness at the margin than an hour of thinking about AI risk?) But you haven’t seemed very interested in explicitly building out this kind of argument, and I don’t understand why that isn’t at the top of your list of strategies to try.
I think of this in terms of a distinction between personal-scale and civilization-scale loci of value. Personal-scale values, which apply to individual modern human minds and speak of those minds, might hold status-quo anchoring sacred and dislike an excessive awareness of disruptive possible changes. Civilization-scale values, by contrast, even as they are facilitated by individuals, do care about an accurate understanding of reality regardless of what it says.
People shouldn’t move too far towards becoming decision-theoretic agents, even if they could, other than for channeling civilization. The latter is currently a necessity (one that’s very dangerous to neglect), but it’s not fundamentally a necessity. What people should move towards is a more complicated question with some different answer (which probably does include more clarity in thinking than is currently the norm or physiologically possible, but still). People are vessels of value; civilization is its custodian. These different roles call for different shapes of cognition.
In this model, it’s appropriate / morally-healthy / intrinsically-valuable for people to live more fictional lives (as they prefer) while civilization as a whole is awake, and both personal-scale values and civilization-scale values agree on this point.