I’d say the best argument against it is a combination of two things: precedent-setting concerns (once labs start open-sourcing, it would be hard to stop even once it becomes dangerous) and misuse risk, which for now seems harder to solve than misalignment risk. For open-sourcing to be good, you need to prevent both misalignment and people misusing the models.
I agree Sakana AI is safe to open-source, but I’m quite sure that sometime in the next 10-15 years, the AIs that get developed will likely be very dangerous to open-source, at least for several years.
Hot take: for now, I think it’s likelier than not that even fully uncontrolled proliferation of automated ML scientists like https://sakana.ai/ai-scientist/ would still be a net differential win for AI safety research progress, for pretty much the same reasons given in https://beren.io/2023-11-05-Open-source-AI-has-been-vital-for-alignment/.