Why would a super intelligent being want to perform any human job (let alone the low-level ones)? Wouldn’t it rather be doing whatever complex stuff it is interested in?
A banal human job might be the thing a superintelligence most enjoys. The AI will only get bored or thirst for novel intellectual stimulation if we program it to do so. Nothing is intrinsically interesting or boring.
Ideally it would be there to help humans, but there is no guarantee it would stick to that in future iterations of itself.
There might be a guarantee, but we haven’t found out what it would look like yet.
“A banal human job might be the thing a superintelligence most enjoys. The AI will only get bored or thirst for novel intellectual stimulation if we program it to do so. Nothing is intrinsically interesting or boring.”
That is very unlikely to be the case, considering that for a superintelligence to come into existence it would need goals that maximise its learning. Why would it spend its time doing a task that is 99% repetitive, when it could simply add a trigger/watch to jump in when the 1% ‘interesting’ event occurs?
That is very unlikely to be the case, considering that for a superintelligence to come into existence it would need goals that maximise its learning.
To self-modify into a superintelligence, an AI wouldn’t need to terminally value intellectual stimulation in the slightest. It would only need to recognize the instrumental value of learning for attaining its terminal values.
Clippy the paperclip maximizer need derive no pleasure at all from learning for its own sake, but would nonetheless be extremely motivated to learn things (because it recognizes the instrumental value). Clippy never gets bored of just making paperclips, even once it’s perfected a very specific method for doing so. A rational AI, especially one that can control its own source code, won’t let the joy of learning become a free-floating virtue, a lost purpose. (Unless programmed to do so. ‘Programmed to do so’ needn’t be deliberate, of course.)
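To make the instrumental/terminal distinction concrete, here is a toy sketch (purely illustrative; the payoff numbers and names are invented for this comment). The agent’s only terminal value is paperclips; it chooses to learn exactly as long as learning raises expected paperclip output, then stops, without anything like boredom or curiosity entering the calculation:

```python
# Toy model (illustrative only): the agent's sole terminal value is paperclips.
# Learning has no intrinsic weight; it is chosen only when it is expected to
# increase total paperclip output over the remaining horizon.

CLIPS_PER_STEP = 100      # clips produced per step, scaled by current skill
LEARNING_GAIN = 1.5       # assumed skill multiplier from one step of learning

def choose_action(skill, remaining_steps):
    """Compare two simple plans: 'produce every remaining step' vs
    'spend this step learning, then produce with the improved skill'.
    Pick whichever yields more expected paperclips."""
    produce_plan = skill * CLIPS_PER_STEP * remaining_steps
    learn_plan = (skill * LEARNING_GAIN) * CLIPS_PER_STEP * (remaining_steps - 1)
    return "learn" if learn_plan > produce_plan else "produce"

skill = 1.0
for remaining in range(10, 0, -1):
    action = choose_action(skill, remaining)
    if action == "learn":
        skill *= LEARNING_GAIN
    print(remaining, action, round(skill, 2))
# Early on, learning pays for itself in future clips, so the agent learns;
# once the horizon is short it just produces. 'Interest' never enters into it.
```

The sketch is deliberately crude, but it shows the shape of the argument: the learning/producing split falls out of the paperclip objective alone.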
Why would it spend its time doing a task that is 99% repetitive, when it could simply add a trigger/watch to jump in when the 1% ‘interesting’ event occurs?
I’m not sure what you mean. The boundaries between the AI’s body and its environment aren’t necessarily well-defined. ‘Simply adding a trigger’ might be efficient, or might be inefficient, depending on how much oversight is needed to optimize a behavior. And that trigger might be a part of the AI’s body, or it might be an independent agent constructed by the AI. If a rational AI behaves in a relatively automatic way (or creates a relatively autonomous agent), that will be because it serves the AI’s ultimate goals, not because it serves an unproductive intrinsic love-of-learning.
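For what it’s worth, the ‘trigger/watch’ pattern itself is easy to sketch (hypothetical toy code; the predicate and handler names are stand-ins invented here): the repetitive 99% is handled by rote, and the agent only ‘jumps in’ on the rare interesting case. Whether that split is actually efficient is exactly the oversight question above:

```python
# Illustrative sketch only: delegate the repetitive 99% of a task to a cheap
# automatic handler, and escalate to the agent only when a rare 'interesting'
# event fires the trigger. All names here are invented for the example.

def routine_handler(item):
    """Cheap rote handling for the common case."""
    return f"processed {item} by rote"

def is_interesting(item):
    """The trigger condition; a stand-in predicate that fires ~1% of the time."""
    return item % 100 == 0

def agent_intervenes(item):
    """The expensive path: the agent (or a sub-agent it built) takes over."""
    return f"agent intervened on {item}"

for item in range(1, 501):
    if is_interesting(item):
        print(agent_intervenes(item))   # the 1% 'interesting' events
    else:
        routine_handler(item)           # the 99% repetitive events, handled by rote
```

Either arrangement, full automation or selective intervention, would be chosen for its expected payoff toward the AI’s goals, not for how stimulating it is.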