I’m the Astera researcher that Nathan spoke to. This is a pretty bad misrepresentation of my views, based on a 5-minute conversation Nathan had with me about this subject (at the end of a technical interview).
A few responses:
We do publish open-source code at https://github.com/Astera-org, but we are considering moving to closed source at some point in the future due to safety concerns.
It is untrue that we are “not interested in securing [our] code or models against malicious actors”, but it is true that we are not currently working on the interventions suggested by Nathan.
My personal view is that AI alignment needs to be tailored to the model, an approach that I am working on articulating further and hope to post about on this forum.
Steve Byrnes works at the Astera Institute on alignment issues.