If anyone says "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," you should really ask for the details of what that means and how they measure whether safety is ahead. (E.g. is it "we did the bare minimum to make this product tolerable to society," or "we realize how hard superalignment will be and will invest enough that independent experts agree we have a 90% chance of solving superalignment before we build something dangerous"?)
I think what he means is "try to be less unsafe than OpenAI while beating those bastards to ASI."
Come on now, there is nothing to worry about here. They are just going to "move fast and break things"...