Could you link to an example? I wonder if you are misinterpreting it, which I will be better able to explain if I see the exact claims.
It looks like this comment https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities?commentId=ePAXXk8AvpdGeynHe and the subsequent discussion is relevant (and not addressed by Eliezer).

If the agency is not inextricably tied to the intelligence, then maybe a reasonable path forward is to try to wring as much productivity as we can out of the passive, superhuman, quasi-oracular just-dumb-data-predictors. And avoid as much as we can ever creating closed-loop, open-ended, free-rein agents.
is what I was trying to get at. I did not see a good counter-argument in that thread.
As I read the thread, people aren't arguing that it's impossible to make pure data-predictors that never turn agentic; they're arguing that such predictors will be heavily limited precisely because they lack unbounded agency. That seems basically correct to me.
There’s the Gwern-style argument that successive generations of AIs will get more agentive as a side effect of the market demanding more powerful AIs. There’s a counterargument that no one wants power they can’t control, so AIs will never be more than force multipliers… although that’s still fairly problematic.
People will probably want to be ahead in the race for power while still maintaining control.