It seems more informative to just look at the top posts by (inflation-adjusted) karma for 2022 (similar to what habryka noted in the sibling comment). AI posts in bold.
AGI Ruin: A List of LethalitiesΩ
Where I agree and disagree with EliezerΩ
SimulatorsΩ
What an actually pessimistic containment strategy looks like
Let’s think about slowing down AIΩ
Luck based medicine: my resentful story of becoming a medical miracle
Counter-theses on Sleep
Losing the root for the tree
The Redaction Machine
It Looks Like You’re Trying To Take Over The WorldΩ
(My understanding of) What Everyone in Technical Alignment is Doing and WhyΩ
Counterarguments to the basic AI x-risk caseΩ
It’s Probably Not Lithium
Reflections on six months of fatherhood
chinchilla’s wild implicationsΩ
You Are Not Measuring What You Think You Are Measuring [AI related]
Lies Told To Children
What DALL-E 2 can and cannot do
Staring into the abyss as a core life skill
DeepMind alignment team opinions on AGI ruin argumentsΩ
Accounting For College Costs
A Mechanistic Interpretability Analysis of GrokkingΩ
Models Don’t “Get Reward”Ω
Why I think strong general AI is coming soon
Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeoverΩ
Why Agent Foundations? An Overly Abstract ExplanationΩ
MIRI announces new “Death With Dignity” strategy
Beware boasting about non-existent forecasting track records [AI related]
A challenge for AGI organizations, and a challenge for readersΩ
I count 18/29 as being about AI. A few of the AI posts are technically more general, and a few non-AI posts seem to be indirectly about AI.