There’re other calculations to consider too (edit: and they almost certainly outweigh the torture possibilities)! For instance:
Suppose that you can give one year of life this year by giving $25 to AMF (GiveWell says about $3,340 to save a child’s life, not counting the other benefits).
If all MIRI does is delay the development of any type of Unfriendly AI, your $25 would need to let MIRI delay that by, ah, 4.3 milliseconds (139 picoyears), since a delay buys that much extra time for everyone alive at once. With 10%-a-year exponential future discounting, 100 years before you expect Unfriendly AI to be created if you don’t help MIRI, and no population growth, that $25 now needs to give them enough resources to delay UFAI by about 31 seconds.
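For the curious, here is the back-of-the-envelope arithmetic behind those figures as a minimal sketch. The world population figure (~7.2 billion) and the discounting convention (a flat 1.1× per year factor) are my assumptions, not spelled out above; the population roughly reproduces the 139-picoyear figure, while this particular discounting convention gives a break-even delay of around a minute rather than 31 seconds, so treat the exact constants as illustrative.

```python
# Back-of-the-envelope check of the break-even delay.
# Assumptions (mine, not from the original comment): world population ~7.2e9,
# "10% a year" discounting means a 1.1x factor per year over 100 years.

SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.156e7 seconds
population = 7.2e9                           # assumed world population
amf_life_years_per_donation = 1.0            # premise: $25 to AMF buys ~1 year of life

# Undiscounted case: delaying UFAI by t gives every living person t extra time,
# so break-even requires population * t = 1 life-year.
t_years = amf_life_years_per_donation / population
print(f"undiscounted break-even delay: {t_years * 1e12:.0f} picoyears "
      f"= {t_years * SECONDS_PER_YEAR * 1e3:.1f} ms")

# Discounted case: if UFAI arrives in 100 years and future life-years are
# discounted at 10%/year, a life-year then is worth 1/1.1**100 of one now,
# so the required delay scales up by that factor (~1.4e4).
discount_factor = 1.10 ** 100
t_discounted_s = t_years * SECONDS_PER_YEAR * discount_factor
print(f"discounted break-even delay: ~{t_discounted_s:.0f} seconds")
```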
This is true of any project that reduces humanity’s existential risk. AI is just the saddest if it goes wrong, because then it goes wrong for everything in (slightly less than) our light cone.