I think that’s a bad bet to rely on, any way you slice it. If you’re imagining, say, GPT-X giving us some extremely capable AI, then it’s hands-on enough that you’ve just given humans too much power. If we’re talking AGI, I agree with Yudkowsky; we’re far more likely to get it wrong than get it right.
If you have a different take I’m curious, but I don’t see any way that it’s reassuring.
IMO we honestly need a technological twist of some kind to avoid AI. Even if we get it right, life with a God just takes a lot of the fun out of it.
*Sidles closer*
Have you heard of… philosophy of universal norms?
Perhaps the human experience thus far is more representative than the present?
Perhaps… we can expect to drift a little closer to it as we push further out?
Perhaps… things might get a little more universal in this here reality-cluttered world.
So, for a start...
Maybe people are right to expect things will get cool...