I believe the common opinion of Zoltan Istvan is that he’s mostly interested in self-promotion, and so I am not surprised that he is emphasizing the more contentious possibilities.
That’s not really a fertile direction of criticism. Whether or not he’s engaging in self-promoting provocation doesn’t affect the validity of his position. Whether the USA can be trusted as the sole custodian of superintelligent AI is, however, an interesting question, since American exceptionalism appears to be in decline.
IAWYC, but disagree on the last sentence: it’s not an interesting question because it’s a wrong question. Superintelligent AI can’t have a “custodian”. Geopolitics of non-superintelligent AI that is smarter than a human but won’t FOOM is a completely different question, probably best debated by people who speculate about cyberwarfare since it’s more their field.
“non-superintelligent AI that is smarter than a human but won’t FOOM” …is most likely a better framing of the issue. I nevertheless think a fooming AI could be owned, so long as we have some channels of control open. That the creation or maintenance of such channels would be difficult doesn’t render the idea impossible in theory.