The difference is that evil wizards are not, as a rule, a different intellectual order than we are. We have some idea of their set of options. Not so for a powerful AI. A dark lord is at least somewhat bounded by the human imagination.
are not, as a rule, a different intellectual order than we are
Yes they are, in the sense that they will have decades to spend ruminating on workarounds, experimenting, and consulting with others. And when one of them finds a solution, the result is potentially an easily transmitted, whole-class compromise that frees all of them at once.
Decades of dedicated human time, teams of humans, etc. are all forms of super-humanity. If you demanded that the same man-hours be spent drafting the language as would later be spent trying to break it, then I'd agree there was no differential advantage, but then it would be quite a challenge to write the rule at all.
Also, unlike the case of an AI, where you have to avoid crippling it lest it become pointless to build in the first place, using unbreakable vows as a punishment for grand crimes against humanity means the restraints can be nearly arbitrarily harsh. The people writing the vows have no need to preserve their victim's decision space or respect their autonomy.
TLDR: Voldemort would not be able to spend decades thinking of ways around the vow, because doing so would itself violate any sensibly formulated vow. (Stray thoughts, sure, you have to permit those, or the vow kills within a day. Sitting down and working at it? No.)
Decades of dedicated human time, teams of humans, etc. are all forms of super-humanity.
Only in an extremely weak sense. Humans can do and think things that cats just can’t, even if they think for a long, long time, or have a bunch of cats working together. The power of a truly superhuman intellect is hard to imagine, and easily underestimated.
In any case, the drafter of the rules would have an enormous comparative advantage, because he can unilaterally enforce dictates on the other party, while the other party has no such authority. It’s not guaranteed he’ll cover all the angles within the human domain, but it’s at least possible, unlike in the case of an AI, where such a strategy is basically guaranteed to fail.