I think it’s admirable to say things like “I don’t want to [do the thing that this community holds as near-gospel as a good thing to do].” I also think the community should take it seriously that anyone feels like they’re punished for being intellectually honest, and in general I’m sad that it seems like your interactions with EAs/rats about AI have been unpleasant.
That said... I do want to push back on basically everything in this post, and encourage you and others in this position to spend some time seeing whether you agree or disagree with the AI stuff.
Assuming you’d look into it in a reasonable way, you’d be much more likely to reach a doomy conclusion if it were actually true. If it were true, it would be very much in your interest — altruistically and personally — to believe it. In general, it’s just pretty useful to have more information about things that could completely transform your life. If you might have a terminal illness, doesn’t it make sense to find out soon, so you can act appropriately even if it’s totally untreatable?
I also think there are many things for non-technical people to do on AI risk! For example, you could start trying to work on the problem, or, if you think it’s just totally hopeless w/r/t your own work, you could work less hard and save less for retirement so you can spend more time and money on things you value now.
For the “what if I decide it’s not a big deal” conclusion:
For points #1 through #3, I’m basically just surprised that you don’t already experience this with the take “I don’t want to learn about or talk about AI,” such that it would get worse if your take were “I have a considered view that AI x-risk is low”! To be honest and a little blunt, I do judge people a bit when they have bad reasoning for either high or low levels of x-risk, but I’m pretty sure I judge them a lot more positively when they’ve made a good-faith effort at figuring it out.
For points #3 and #4, idk, Holden, Joe Carlsmith, Rob Long, and possibly I (among others) are all people with social science or humanities backgrounds who have (hopefully) contributed something valuable to the fight against AI risk, so I don’t think your background means you wouldn’t be persuasive, and it seems incredibly valuable for the community if more people think things through and come to this opinion. The consensus that AI safety is a huge deal currently means we have hundreds of millions of dollars, hundreds of people (many of whom are anxious and/or depressed because of this consensus), and dozens of orgs focused on it. Imagine if this consensus is wrong — we’d be inflicting so much damage!
Assuming you’d look into it in a reasonable way, you’d be much more likely to reach a doomy conclusion if it were actually true.
This is too optimistic an assumption. On one hand, we have Kirsten’s ability to do AI research; on the other hand, we have all the social pressure that Kirsten complains about. You seem to assume that the former is greater than the latter, which may or may not be true (no offense meant).
An analogy with religion: telling someone to do independent research on the historical truth about Jesus. In theory, that should work. In practice… maybe that person has no special talent for historical research; plus there is always the background knowledge that arriving at the incorrect answer would cost them all their current friends anyway. (I hope it does not work the same way with EAs, but the people who can’t stop talking about the doom now probably won’t be able to stop talking about it even if Kirsten tells them “I have done my research, and I disagree.”)
This is exactly how I feel; thank you for articulating it so well!
My response to both paragraphs is that the relevant counterfactual is “not looking into/talking about AI risks.” I claim that there is at least as much social pressure from the community to take AI risk seriously and to talk about it as there is to reach a pessimistic conclusion, and that people are very unlikely to lose “all their current friends” by arriving at an “incorrect” conclusion if their current friends are already fine with the person not having any view at all on AI risks.
Thanks, this is pretty persuasive and worth thinking about (so I will think about it!)