In the spirit of Situational Awareness, I’m curious how people are parsing some apparent contradictions:
OpenAI is explicitly pursuing AGI
Many people in the field (e.g., Leopold Aschenbrenner, who worked with Ilya Sutskever) presume that, approximately when AGI is reached, we'll have automated software engineers, and that ASI will follow very soon after
SSI is explicitly pursuing straight-shot superintelligence—the announcement starts off by claiming ASI is “within reach”
In his departing message from OpenAI, Sutskever said “I’m confident that OpenAI will build AGI that is both safe and beneficial...I am excited for what comes next—a project that is very personally meaningful to me about which I will share details in due time”
At the same time, Sam Altman said “I am forever grateful for what he did here and committed to finishing the mission we started together”
Does this point to increased likelihood of a timeline in which somehow OpenAI develops AGI before anyone else, and also SSI develops superintelligence before anyone else?
Does it seem at all likely from the announcement that by “straight-shot” SSI is strongly hinting that it aims to develop superintelligence while somehow sidestepping AGI (which they won’t release anyway) and automated software engineers?
Or is it all obviously just speculative talk/PR, not to be taken too literally, and we don't really need to put much weight on the differences between AGI/ASI for now? If that were the case, though, the announcements seem more specific than warranted.