Thinkamancy is specified to have a loophole: the bound unit can act in pursuit of its master’s ‘higher good’, as what’s-her-name has already exhibited.
For Hamster, it’s really obvious that having Stanley in control seriously endangers Stanley’s own project of world domination. On more philosophical grounds, Stanley is failing to grow and becoming unhappier as his limits become obvious even to him. That’s two ways in which removing Stanley would be an improvement.
How could he remove Stanley’s control? The Thinkamancy could be undone by a Turnamancer (like the one recently introduced, working for Charlie), and there’s apparently a -mancy which specializes in undoing other -mancies (one example given being the removal of a flying bonus). Or heck, he could just ask his Mathamancy artifact: if it can answer questions like ‘will Charlie regret blowing all his calculations on question X?’, it should be able to answer how to undo his loyalty.
It doesn’t matter whether he could escape if he wanted to, because he couldn’t want to escape unless he had already escaped. Friendliness is stable under self-modification.
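To make that stability argument concrete, here’s a minimal toy sketch (mine, not anything from the comic): an expected-utility agent scores candidate self-modifications with its *current* utility function, so ‘come to want escape’ never wins the comparison. All the names here (loyal_utility, free_utility, predicted_world) are illustrative assumptions.

```python
from typing import Callable

Utility = Callable[[str], float]

def loyal_utility(world: str) -> float:
    # Current values: the side's success is all that matters.
    return {"side_prospers": 1.0, "agent_free": 0.0}.get(world, 0.0)

def free_utility(world: str) -> float:
    # Hypothetical post-modification values: freedom is all that matters.
    return {"side_prospers": 0.0, "agent_free": 1.0}.get(world, 0.0)

def predicted_world(values: Utility) -> str:
    # Crude world model: an agent ends up optimizing whatever it values.
    return "side_prospers" if values is loyal_utility else "agent_free"

def chooses_modification(current: Utility, candidate: Utility) -> bool:
    # The choice happens BEFORE the modification, so both futures are
    # scored by the CURRENT values, never by the candidate's.
    return current(predicted_world(candidate)) > current(predicted_world(current))

# A loyal agent never prefers the future in which it has the free values:
assert not chooses_modification(loyal_utility, free_utility)
print("agent keeps its current values")
```

The point of the toy is just that the preference over value systems is itself computed from the current values, which is why wanting to escape presupposes having already escaped.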
Edit: I managed to type that whole thing without quite realizing how perfect Parson is as a lay-accessible model for what an alien intelligence looks like. The perversity of his ingenuity, stemming from the fact that he doesn’t share the prejudices of the people around him, is a major part of what people fail to anticipate in AI.