Do you have any argument for why the SIAI is unlikely to be the best charity, other than the sheer size of the option space?
This is a community where many members have put substantial thought into locating the optimum in that option space, and they have well-developed reasons for their conclusions. Further, there are not many real charities clustered around that optimum. Simply claiming a low prior probability of picking the right charity is not a strong argument here. If you have additional arguments, I suggest you spell them out.
(I’ll also add that I personally concluded that an SIAI-like charity would be the optimal recipient for charitable donations before learning that SIAI existed, and before encountering Overcoming Bias, Less Wrong, or any of Eliezer’s writings. I can therefore completely discount the possibility that my reasoning was corrupted by an aura effect around anyone I considered smarter or more moral than myself.)