Thanks so much for making this!
I’m hopeful this sort of dataset will grow over time as new sources come about.
In particular, I’d nominate adding MLSN (https://www.alignmentforum.org/posts/R39tGLeETfCZJ4FoE/mlsn-4-many-new-interpretability-papers-virtual-logit) to the list of newsletters in the future.
Sometimes I get asked by intelligent people I trust in other fields, "what's up with AI x-risk?", and I think at least part of it unpacks to this: why don't more people believe in / take seriously AI x-risk?
I think that is actually a pretty reasonable question. I think two follow-ups are worthwhile, and I don't know of good citations for either / don't know if they exist:
- a sociological/anthropological/psychological/etc. study of what's going on in people who are familiar with the ideas/reasoning of AI x-risk, but decide not to take it seriously / don't believe it. I expect in-depth interviews would be great here.
- we should probably just write up as many of the obvious objections as we can ourselves, up front.
The latter one I can take a stab at here. Taking the perspective of someone who might be interviewed for the former:
- historically, ignoring anyone who says "the end of the world is near" has been a great heuristic
- very little of the public intellectual sphere engages with the topic
- the part of the public intellectual sphere that does engage is disproportionately meme lords
- most of the writing about this is exceptionally confusing and jargon-laden
- there are no college courses on this / it doesn't have the trappings of a legitimate field
- it feels a bit like a Pascal's mugging; at the very least I'm not really prepared to try to think about actions/events with near-infinite consequences
- people have been similarly doomy about other technologies and so far the world turned out fine
- we have other existential catastrophes looming (climate change, etc.) that are already well understood and scientifically supported, so our efforts are better spent on those than on this confusing hodge-podge
- this field doesn't seem very diverse and seems a bit monocultural
- this field doesn't seem to have a deep/thorough understanding of all of the ways technology is affecting people's lives negatively today
- it seems weird to care about future people when there are present people suffering
- I see a lot of public disagreement about whether or not AGI is even real, which makes the risk arguments feel much less trustworthy to me
I think I'm going to stop for now, but I wish there were a nice high-quality organization of these. At the very least, having the steelmanned version of each seems good to have around, in part as an "epistemic hygiene" thing.