What excites me most about Eric’s position, since I first learned of it, is that it provides a framework for building AI systems that are safer than the ones we might otherwise build if we were trying to target AGI directly. From this perspective it’s valuable for setting policy and missions for AI-focused endeavors in a way that potentially delays the creation of AGI.
It might be argued that this is inevitable (last time I talked to Eric this was the impression I got; he felt he was laying out ideas that would happen anyway and was taking the time to explain why he thinks they will happen that way, rather than trying to nudge us towards a path). Even so, having it codified and publicized as a best course of action may serve, on the margin, to encourage folks to do AI development with an eye towards incorporation into CAIS rather than as a stepping stone towards AGI. This matters because it applies optimization pressure against adding the things AGI would need, since those take extra time and cost. If most of the short- and medium-term economic and academic benefits can be realized within the CAIS paradigm, then we will see a shift towards optimizing more for CAIS and less for AGI, which seems broadly beneficial from a safety standpoint because CAIS is less integrated and less agentic by design (at least for now; there might be a path from CAIS to AGI). Making this common knowledge and the accepted paradigm of AI research would thus help push people away from incentive gradients that more directly lead to AGI, buying time for more safety research.
Given this, it’s probably worthwhile to make folks who are well positioned to influence other researchers better aware of this work, which might be something folks here can do if they have the ears of those people (or just are those people).