Creating a global ban on gain-of-function research : coordinating the world to prevent AGI :: solving the tiling agents problem : solving the whole alignment problem.
I don’t know when MIRI started working on Tiling Agents, but it was published in 2013. In retrospect, it seems like we would not have wanted people to wait that long to work on alignment. And it’s especially problematic now that timelines are shorter.
I mean, assume a coordinated effort to ban gain-of-function research succeeds eight years from now; even if we then agree that policy is the way to go, it may be too late.
> In retrospect, it seems like we would not have wanted people to wait that long to work on alignment.
I don’t buy this characterization. This might sound at odds with my comment above, but working on tiling agents was an attempt at solving alignment, not deferring solving alignment.
The way you solve a thorny, messy, real-world technical problem is to first solve an easier problem with simplified assumptions, and then gradually add in more complexity.
I agree that this analogizes less tightly to the political action case, because a ban on gain-of-function research is not a strictly necessary step toward a ban on AI, the way solving tiling agents is (or at least seemed at the time to be) a necessary step toward solving alignment.
> I don’t buy this characterization. This might sound at odds with my comment above, but working on tiling agents was an attempt at solving alignment, not deferring solving alignment.
I totally agree. My point was not that tiling agents isn’t alignment research (it definitely is); it’s that the rest of the community wasn’t waiting for that success to start doing stuff.