In fact, if our superintelligent program has no hard-coded survival mechanism, it is more likely to switch itself off than to destroy the human race willfully.
This guy seems to miss the point. Most possible superintelligences would destroy the human race incidentally.
If you specify a reasonable enumeration of utility functions (such as shortest first), and cross off the superintelligences that don’t do anything very dramatic as not being very “super”, this seems pretty reasonable.
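To make the “shortest first” idea concrete, here is a minimal Python sketch of that kind of enumeration. It assumes utility-function descriptions are just binary strings, and it uses an obviously toy stand-in for the “does something very dramatic” test; both choices are illustrative assumptions, not anything established in the discussion above.

```python
from itertools import count, islice, product

def utility_descriptions(alphabet="01"):
    """Enumerate all finite descriptions over `alphabet` shortest-first
    (length-lexicographic order) -- the ordering mentioned above."""
    for length in count(1):
        for symbols in product(alphabet, repeat=length):
            yield "".join(symbols)

def is_dramatic(description: str) -> bool:
    """Toy stand-in for 'an agent optimising this utility function would
    do something very dramatic'. No such test really exists; this
    placeholder just keeps the sketch runnable."""
    return description.count("1") > len(description) // 2

def super_candidates(n: int):
    """First n descriptions that survive the 'cross off the undramatic
    ones' filter -- i.e. the ones counted here as genuinely 'super'."""
    return list(islice(filter(is_dramatic, utility_descriptions()), n))

if __name__ == "__main__":
    # e.g. ['1', '11', '011', '101', '110', '111', ...]
    print(super_candidates(10))
```

Under this kind of enumeration, claims like “most possible superintelligences would do X” amount to claims about which descriptions dominate early in the ordering after the filter is applied.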
This guy seems to miss the point. Most possible superintelligences would destroy the human race incidentally.
Is it established that most would?
If you specify a reasonable enumeration of utility functions (such as shortest first), and cross off the superintelligences that don’t do anything very dramatic as not being very “super”, this seems pretty reasonable.
Yes, ok.