Asteroid strikes are very unlikely, so beating them is a really low standard, one which, IMO, machine intelligence projects clear with ease. By most accounts, funding the area sensibly would help bring machine intelligence about. Detailed justification is beyond the scope of this comment, though.
Assuming that an asteroid strike prevention program costs no more than a few hundred million dollars, I don’t think it’s easy to do better at reducing existential risk than funding such a program (though it may be possible). I intend to explain why I think it’s so hard to lower existential risk by funding FAI research later on (not sure when, but within a few months).
I’d be interested in hearing your detailed justification. Maybe you could make a string of top-level posts at some point.