Eliezer, I’m actually a little surprised at that last comment. As a Bayesian, I recognize that reality doesn’t care if I feel comfortable with whether or not I “know” an answer. Reality requires me to act on the basis of my current knowledge. If you think AI will go self-improving next year, you should be acting much differently than if you believe it will go self-improving in 2100. The difference isn’t as stark at 2025 versus 2075, but it’s still there.
What makes your unwillingness to commit even stranger is your advocacy that there's significant existential risk associated with self-improving AI. It's literally a life or death situation by your own valuation. So how are you going to act: as if it will happen sooner, or later?