Yet there is a difference when scaling. If Gwern is right (or if LMs become more like what he's describing as they get bigger), then we end up with a single agent which we probably shouldn't trust, because of all our worries about alignment. On the other hand, if scaled-up LMs are non-agentic/simulator-like, then they would stay motivationless, and there would be at least the possibility of using them to help alignment research, for example by trying to simulate non-agentic systems.
Yeah, I agree that in the future there is a difference. I don’t think we know which of these situations we’re going to be in (which is maybe what you’re arguing). Idk what Gwern predicts.
Exactly. I'm mostly arguing that the case for the agent scenario isn't as clear-cut as I've seen some people defend it, which doesn't mean it couldn't be true.