I think the concern stands even without a FOOM; if AI gets a good bit smarter than us, however that happens (design plus learning, or self-improvement), it’s going to do whatever it wants.
As for your “ideal Bayesian” intuition, I think the challenge is deciding WHAT to apply it to. The computational power needed to apply it to every thing and every concept on earth is truly staggering. There is plenty of room for algorithmic improvement, though, and an AI doesn’t need to get anywhere near that ideal to outwit (and out-engineer) us.
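To put rough numbers on “truly staggering”: exact Bayesian updating over a full joint distribution grows exponentially in the number of variables being tracked. A minimal back-of-envelope sketch (the variable counts are illustrative, not from any real model):

```python
# Back-of-envelope: exact Bayesian inference requires representing a
# full joint distribution, whose size grows exponentially in the
# number of variables. All numbers here are purely illustrative.

def joint_table_size(num_variables: int, states_per_variable: int = 2) -> float:
    """Number of entries in a full joint distribution table."""
    return float(states_per_variable) ** num_variables

# Even a toy world model with 300 binary features needs a table with
# ~2e90 entries -- far more than the ~1e80 atoms in the observable
# universe, before a single update is computed.
print(f"{joint_table_size(300):.2e}")  # ~2.04e+90
```

That blowup is exactly why real reasoners (human or AI) must use approximations and choose what to model, and why there is so much room for algorithmic improvement short of the ideal.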