We might disagree some. I think the original comment is pointing at the (reasonable, as far as I can tell) claim that oracular AI can have agent-like qualities if it produces plans that people follow.
I agree that it may be possible to turn such a system into an agent. But I think the original comment is defending a stronger claim, that there’s a sort of no-free-lunch theorem: either you don’t act on the outputs of the oracle at all, or it’s just as much of an agent as any other system.
I think the stronger claim is clearly not true. The worrying thing about powerful agents is that their outputs are selected to cause certain outcomes, even if you try to prevent those outcomes. So their outputs have to differ depending on the actions you’re going to take in response to them. But the point of an oracle is not to have that property: its outputs are decided by a criterion (something like truth) that is independent of the actions you’re going to take in response[1]. So if you respond differently to its outputs, they cause different outcomes. Assuming you’ve succeeded at building the oracle to specification, it’s clearly not the case that the oracle has the worrying property of agents just because you act on its outputs.
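To make that asymmetry concrete, here is a toy sketch (all names are hypothetical and purely illustrative, not anyone’s actual proposal):

```python
# Toy contrast (hypothetical names, illustrative only): an oracle's output is
# selected by a fixed criterion that ignores the user's response policy, while
# an agent's output is selected by the outcome it causes given that policy.

def oracle_output(candidates, accuracy):
    # accuracy(c) depends only on how well c matches the world,
    # not on what the user will do with it.
    return max(candidates, key=accuracy)

def agent_output(candidates, response_policy, outcome_value):
    # The agent ranks candidates by the value of the outcome that results once
    # the user acts on them, so changing response_policy changes the output.
    return max(candidates, key=lambda c: outcome_value(response_policy(c)))
```

Swapping in a different `response_policy` changes what `agent_output` returns but leaves `oracle_output` untouched, which is the sense in which the oracle’s outputs aren’t selected to cause particular outcomes.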
I don’t disagree that, by either hooking the oracle up in a scaffolded feedback loop with the environment or getting it to output plans, you could extract more agency from it. Of the two, I think the scaffolding can in principle easily produce dangerous agency in the same way long-horizon RL can (sketched below), but that the version where you get it to output a plan is much less worrying (I can argue for that in a separate comment if you like).
[1] I’m ignoring the self-fulfilling prophecy case here.
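For concreteness, a minimal sketch of the scaffolded-feedback-loop case, assuming a hypothetical `env` object with `observe`/`step` methods and an `oracle` callable (none of this is a real API):

```python
# Minimal sketch of an oracle scaffolded into a feedback loop with the
# environment (hypothetical interfaces, illustrative only).

def run_scaffold(oracle, env, steps=10):
    obs = env.observe()
    for _ in range(steps):
        # The oracle's answer is executed in the world, and the resulting
        # observation conditions the next query; iterating this is what
        # distinguishes the feedback-loop case from the one-shot plan case.
        action = oracle(f"Given observation {obs!r}, what should be done next?")
        obs = env.step(action)
    return obs
```

The one-shot plan case, by contrast, is a single query whose output you execute without feeding the results back in.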
Thanks, I appreciate the reply. It sounds like I have somewhat wider error bars but mostly agree on everything but the last sentence, where I think it’s plausibly but not certainly less worrying. If you felt like you had crisp reasons why you’re less worried, I’d be happy to hear them, but only if it feels positive for you to produce such a thing.
Good point. I think that if you couple the answers of an oracle to reality by some random process, then you are probably fine.
However, many people want to use the outputs of the oracle in very direct ways. For instance, you ask it what code you should put into your robot, and then you just put that code into the robot.
Could we have an oracle (i.e., one trained according to some truth criterion) that, when used in this very straightforward way, still exerts optimization pressure on the world?