I would really like to see a “What can you do to help” section.
In fact, maybe we should be seriously thinking about concrete ways to allow non-mathematicians to also contribute to solving this problem.
What can you do to help? In order of priority, I think the top choices for non-specialists are:
1. Get more money funneled into MIRI and FHI.
2. Memetically propagate informed worry about AI risk. Find ways to make the idea stick.
3. Improve your own and others’ domain-general rationality.
4. Try to acquire math knowledge and skills that might be in demand by FAI researchers over the next few decades.
If this sequence does its job, 2 is simple enough — tell people to begin by sharing the list. (Or other targeted articles and lists, depending on the target audience.)
3 is a relatively easy sell, and the primary way I expect to contribute.
4 is quite difficult and risky at this stage.
1 is hard to optimize at the meta level because good signaling is hard. Our default method for beginning to combat high-level biases — write a Sequence focused on this specific issue to bring it to people’s attention — is unusually tricky to pull off here. Something off-site and independent — a petition, signed by lots of smart people, telling everyone that AI risk is important? an impassioned personal letter by a third party, briefly and shibbolethlessly laying out the Effective Altruism case for MIRI? — might be more effective.