I’m… not sure what you mean by this. And I wouldn’t be against putting a whole CEV-ish human morality in an AI, either. My point is that there seems to be a big space between your Outcome Pump failure example and highly paternalistic AIs of the sort that caused Failed Utopia 4-2.
It reminds me a little of how modern computers are only occasionally used for computation.
Anything smarter-than-human should be regarded as containing unimaginably huge forces held in check only by the balanced internal structure of those forces, since there is nothing which could resist them if unleashed. The degree of ‘obedience’ makes very little difference to this fact, which must be dealt with before you can go on to anything else.
As I understand it, an AI is expected to make huge, inventive efforts to fulfill its orders as it understands them.
You know how sometimes people cause havoc while meaning well? Imagine something immensely more powerful and probably less clueful making the same mistake.