You keep misreading me. I am not claiming that I have a solution. I am claiming that MIRI is overly pessimistic about the problem, and offering an over-engineered solution. Inasmuch as you say there is a middle ground, you kind of agree.
The thing is, MIRI doesn’t claim that a superintelligent world-destroying paperclipper is the most likely scenario. It’s just illustrative of why we have an actual problem: because you don’t need malice to create an Unfriendly AI that completely fucks everything up.
So how did you like CATE, over in that other thread? That AI is non-super-human, doesn’t go FOOM, doesn’t acquire nanotechnology, can’t do anything a human upload couldn’t do… and still can cause quite a lot of damage simply because it’s more dedicated than we are, suffers fewer cognitive flaws than us, has more self-knowledge than us, and has no need for rest or food.
I mean, come on: what if a non-FOOMed but Unfriendly AI becomes as rich as Bill Gates? After all, if Bill Gates did it while human, then surely an AI as smart as Bill Gates but without his humanity can do the same thing, while causing a bunch more damage to human values because it simply does not feel Gates’ charitable inclinations.
The point is that to make reliable predictions, we need realistic examples like these.