And I do not think it is reasonable to come up with a few weak arguments for how intelligence could be dangerous and then conclude that their combined probability outweighs any good argument against one of the premises, or in favor of other risks.
I’m not sure who is doing that. Being hit by an asteroid, nuclear war and biological war are other potential major setbacks. Being eaten by machines should also have some probability assigned to it, though it seems pretty challenging to know how to do that; it’s a bit of an unknown unknown. Anyway, this material probably all deserves some funding.