Any FAI discussion is mindkilling unless it is explicitly conditional on “assuming FOOM is logically possible”. After all, we don’t have enough evidence to bridge the difference in priors, and neither side (AI is a risk / AI is not a risk) explicitly acknowledges that fact (and this failure makes them sides rather than partners).