I’m reminded of the old Star Trek episode with the super humans that were found in cryosleep that then took over the Enterprise.
While I agree this could be one potential counter to AI (unless relative speed differences overwhelm it), I also see a similar kind of risk from the engineered humans themselves. On that view, the program would need to be widely implemented (which could make it an x-risk case in its own right), or we could easily find ourselves having created a ruler class that views ordinary humans as subhuman or not deserving of full rights. I'm not sure how that gets done, though, from a purely practical and politically viable standpoint.
I certainly think that if we're doing things piecemeal, we would want somewhat smarter people before we have much longer-living people.
Yeah, in Star Trek, genetic engineering for increased intelligence reliably produces arrogant bastards, but that’s just so they don’t have to show the consequences of genetic engineering on humans...