A lot depends on what we mean by “superintelligent.” But yes, there’s a level of intelligence above which I’m fairly confident that I would change the world, as rapidly as practical, because I can. Why wouldn’t you?
Not just because I can. Maybe for other reasons, like the fact that I still care about the punier humans and want to make the world better for them. That depends on preferences that an AI might or might not have.
It’s not really about what I would do; it’s the fact that we don’t know what an arbitrary superintelligence will or won’t decide to do.
(I’m thinking of “superintelligence” as “smart enough to do more or less whatever it wants by sheer thinkism,” which I’ve already said I agree is possible. Is this nonstandard?)
Sure, “because I have preferences which changing the world would more effectively maximize than leaving it as it is” is more accurate than “because I can”. And, sure, maybe an arbitrary superintelligence would have no such preferences, but I’m not confident of that.
(Nope, it’s standard (locally).)