I have long complained about SI’s narrow and obsessive focus on the “utility function” aspect of AI. Simply put, SI assumes that future superintelligent systems will be driven by certain classes of mechanism that are still only theoretical, and which are very likely to be superseded by other kinds of mechanism with very different properties. Even worse, the “utility function” mechanism favored by SI is quite likely to be so unstable that it would never allow an AI to achieve any kind of human-level intelligence, never mind the kind of superintelligence that would be threatening.
I often observe very intelligent people acting irrationally, and I suspect superintelligent AIs might act superirrationally. Perhaps the focus should be on creating rational AIs first: any superintelligent being would have to be first and foremost superrational, or we are in for a world of trouble. In my experience, rationality trumps intelligence every time.