In the email above, clearly stated, is a line of reasoning that has led very competent people to work extremely hard to build potentially-omnicidal machines.
Absolutely true.
But also Altman’s actions since are very clearly counter to the spirit of that email. I could imagine a version of this plan, executed with earnestness and attempted cooperativeness, that wasn’t nearly as harmful (though still pretty bad, probably).
Part of the problem is that “we should build it first, before the less trustworthy” is a meme that universalizes terribly.
Part of the problem is that Sam Altman was not actually sincere in the execution of that sentiment, regardless of how sincere his original intentions were.
It’s not clear to me that there was actually an option to build a $100B company with competent people around the world who would’ve been united in conditionally shutting down and unconditionally pushing for regulation. I don’t know that the culture and concepts of the people who do this kind of work in the business world would have allowed such a plan to be actively pursued.
You may be right. Maybe the top talent wouldn’t have gotten on board with that mission, and so the company wouldn’t have attracted top talent.
I bet Ilya would have been in for that mission, and I think a surprisingly large number of other top researchers might have been as well. Obviously we’ll never know.
And I think if the founders are committed to a mission, and they reaffirm their commitment in every meeting, they can go surprisingly far in making it the culture of an org.
Maybe there’s hope there, but I’ll point out that many of the people needed to run a business (finance, legal, product, etc.) are not idealistic scientists who would be willing to have their equity become worthless.
Those people don’t get substantial equity in most businesses in the world. They generally get paid a salary and benefits in exchange for their work, and that’s about it.
Ilya is demonstrably not in on that mission, since his first move after leaving OpenAI was to found another AGI company, thus increasing x-risk.
I don’t think that’s a valid inference.
Also, Sam Altman is a pretty impressive guy. I wonder what would have happened if he had decided to try to stop humanity from building AGI, instead of trying to be the one to build it before Google.