When I say that our intuitive sense of power has to do with the easily exploitable opportunities available to an actor, that refers to opportunities which e.g. a ~human-level intelligence could notice and take advantage of. This has some strange edge cases, but it’s part of my thinking.
The key point is that AUPconceptual relaxes the problem:
> If we could robustly penalize the agent for intuitively perceived gains in power (whatever that means), would that solve the problem?
This is not trivial. I think it’s a useful question to ask (especially because we can formalize so many of these power intuitions), even if none of the formalizations are perfect.
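For concreteness, here is one such formalization, sketched in my own notation (the exact scaling term differs between AUP variants, so treat this as illustrative): the AUP penalty from the Conservative Agency paper charges the agent for how much an action changes its attainable utility, relative to inaction, across a set of auxiliary reward functions $\mathcal{R}_{\text{aux}}$:

$$R_{\text{AUP}}(s,a) \;=\; R(s,a) \;-\; \frac{\lambda}{|\mathcal{R}_{\text{aux}}|} \sum_{R_i \in \mathcal{R}_{\text{aux}}} \left| Q^{*}_{R_i}(s,a) - Q^{*}_{R_i}(s,\varnothing) \right|$$

Here $\varnothing$ is a no-op action, $\lambda$ controls the penalty strength, and $Q^{*}_{R_i}$ is the optimal action-value function for auxiliary reward $R_i$. The penalty grows when an action shifts the agent’s ability to pursue many different goals at once, which is one way of cashing out “intuitively perceived gains in power”.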
> The key point is that AUPconceptual relaxes the problem:
>
> > If we could robustly penalize the agent for intuitively perceived gains in power (whatever that means), would that solve the problem?
>
> This is not trivial.
Probably I’m just missing something, but I don’t see why you couldn’t say something similar about:
“preserve human autonomy”, “be nice”, “follow norms”, “do what I mean”, “be corrigible”, “don’t do anything I wouldn’t do”, “be obedient”
E.g.
> If we could robustly reward the agent for intuitively perceived nice actions (whatever that means), would that solve the problem?
It seems like the main difference is that, for power in particular, there’s more hope that we could formalize it without reference to humans (which seems harder to do for e.g. “niceness”), but then my original point applies.
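(For reference, one human-free formalization along these lines does exist: in the “Optimal Policies Tend to Seek Power” formalism, the POWER of a state is, up to normalization and an additive shift, the agent’s average optimal value over a distribution $\mathcal{D}$ of reward functions,

$$\operatorname{POWER}_{\mathcal{D}}(s) \;\approx\; \mathbb{E}_{R \sim \mathcal{D}}\!\left[ V^{*}_{R}(s) \right],$$

a definition that mentions only the environment and a reward distribution, never humans.)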
(This discussion was continued privately – to clarify, I was narrowly arguing that AUPconceptual is correct, but that this should only provide a mild update in favor of implementations working in the superintelligent case.)