Build up general altruistic capacities through things like the effective altruist movement or GiveWell’s investigation of catastrophic risks
I read every blog post they put out.
Invest money in an investment fund for the future which can invest more [...] when there are better opportunities
I figure I can use my retirement savings for this.
(recalling that most of the value of MIRI in your model comes from major institutions being collectively foolish or ignorant regarding AI going forward)
I thought it came from them being collectively foolish or ignorant regarding Friendliness rather than AGI.
Prediction markets, meta-research, and other institutional changes
Meh. Sounds like Lean Six Sigma or some other buzzword business process improvement plan.
Work like Bostrom’s
Luckily, Bostrom is already doing work like Bostrom’s.
Pursue cognitive enhancement technologies or education methods
Too indirect for my taste.
Find the most effective options for synthetic biology threats
Not very scary compared to AI. Lots of known methods to combat green goo.