I may have been primed to interpret this post in those terms too much, because I perceived it to be a reaction to Eliezer’s recent doomy-sounding blog posts (and people worrying about AI more than usual recently because of that, plus ML news, plus various complicated social dynamics), trying to prevent the community from ‘going too far’ in certain directions. … But it sounds like I may be imposing context on the post that isn’t the way you were thinking about it while writing it.
Oh, yeah, maybe. I was not consciously responding to that. I was consciously responding to a thing that’s been bothering me quite a bit about EA for ~5 or more years, which is that there aren’t enough serious hobbies around here IMO, and also that people often report losing the ability to enjoy hanging out with friends, especially friends who aren’t in these circles, and just enjoying one another’s company while doing nothing, e.g. at the beach with some beers on a Saturday. (Lots of people tell me they try allocating days to this; it isn’t about the time, it’s about an acquired inability to enter certain modes.)
Thanks for clarifying this though, that makes sense.
I have some other almost-written blog posts that are also about trying to restore access to “hanging out with friends, enjoying people” mode and “serious hobbies” mode, which I hope to post in the next couple of weeks.
Back in ~2008, I sat around with some others trying to figure out: if we’re successful in getting a lot of people involved in AI safety—what can we hope to see at different times? And now it’s 2022. In terms of “there’ll be a lot of dollars people are up for spending on safety”, we’re basically hitting my highest 2008 hopes. In terms of “there’ll be a lot of people who care”, we’re doing… less well than I was hoping, but certainly better than I was expecting: hitting my 2008 “pretty good” level. In terms of “those people who care will be broad and varied, trying their hands at making movies, doing varied kinds of science and engineering research, and learning all about the world while keeping their eyes open for clues about the AI risk conundrum, and being ready to act when a hopeful possibility comes up”, we’re doing less well than my 2008 hopes. I want to know why and how to unblock it.
I think that to the extent people are failing to be interesting in all the ways you’d hoped they would be, it’s because being interesting in those ways seems to them to have greater costs than benefits. If you want people to see the benefits of being interesting as outweighing the costs, you should make arguments that help them improve their causal models of the costs and of the benefits, and then compare the two. (E.g., what’s the causal pathway by which an hour of thinking about Egyptology or repairing motorcycles or writing fanfic ends up having not just positive expected usefulness, but higher expected usefulness at the margin than an hour of thinking about AI risk?) But you haven’t seemed very interested in explicitly building out this kind of argument, and I don’t understand why that isn’t at the top of your list of strategies to try.