Nice analogy! I approve of stuff like this. And in particular I agree that MIRI hasn’t convincingly argued that we can’t do significant good stuff (including maybe automating tons of alignment research) without agents.
Insofar as your point is that we don’t have to build agentic systems, and that nonagentic systems aren’t dangerous, I agree? If we could coordinate the world to avoid building agentic systems I’d feel a lot better.