Let’s say that the tool/agent distinction exists, and that tools are demonstrably safer. What then? What course of action follows?
Should we ban the development of agents? History suggests that bans like that rarely hold. And with existential stakes, it only takes one person defying the ban for all of us to be screwed.
Which means the only safe route is to build a friendly agent before anyone else builds an unfriendly one. And that is pretty much SI's goal, right?
So I don’t understand how, practically speaking, the tool/agent argument changes anything.
Yes, rushing things creates its own risks. But you still have to move fast enough to outrun the creation of UFAI by someone else. So you have to be fast, but not too fast. It’s a balancing act.