We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
In fairness, there’s a high-integrity version of this that’s net good:
Accept plenty of capital.
Observe that safety is not currently clearly ahead.
Spend the next n years working entirely on alignment, until and unless it’s solved.
This isn’t the outcome I expect, and it wouldn’t stop other actors from releasing catastrophically unsafe systems, but given that Ilya Sutskever has, to the best of my (limited) knowledge, been fairly high-integrity in the past, it’s worth noting as a possibility. It would be genuinely lovely to see them use a ton of venture capital for alignment work.