Re the failed totalizing worldview, I'd say the failure mostly comes down not to the philosophical premises being incorrect (with a few exceptions), but to a combination of factors: underestimating how hard inference from bare premises is without empirical results (a computational-complexity problem), failing to scale down from the idealized reasoner, and philosophical progress turning out to be mostly unnecessary for Friendly AI.
Which is why I refuse to generalize from Eliezer Yudkowsky and MIRI's failure to build Friendly AI to the conclusion that all, or even most, hopes of building Friendly AI must fail.