Promoted to curated: I was definitely hesitant to promote the visibility of this post much further, since the knowledge in it is largely about how to make more dangerous AI, and I would like there to be less dangerous AI. However, I think almost all of the actors most likely to cause harm with this information already have it.
The key exception is maybe governments, which I feel confused about: them being better informed about model training helps with regulation, but it might also lead them to pump more money into AI and accelerate the arms race, and that would be sad.
Overall, I think the benefits outweigh the costs here. It's a quite well-written post, and it gave me a bunch of useful abstractions for thinking about model training, as someone who hasn't done much of it, and I think that will help me and others handle AGI stuff better.