I think that for a long time the alignment community was betting on recursive self-improvement happening pretty early, in which case there wouldn’t be a gap between AI being able to develop bioweapons and AGI arriving.
Over time, people have updated towards thinking that the path to AGI will be slower and more gradual, and I think people were hoping either that we could tank whatever harms happen in this period or that governments would deal with them.
Now that some of these harms are basically imminent, I think the community has updated towards being more worried about some of the less weird harms like misinformation or bioweapons. I agree with this shift, but I don’t want to take it too far either, since I mostly see this work as buying time for us to solve alignment.