The main technical crux: we think the main difficulty is not reaching this level of capability, but that this level of capability already exceeds the ability to publish papers at conferences like NeurIPS, which we perceive as the threshold for recursive self-improvement. The plan therefore demands robust global coordination to avoid foom. And a model helping with alignment research seems much more easily attainable than the creation of this world model, so OpenAI's plan may still be more realistic.
I’d call this the main strategic crux (or one of the main ones). The main technical cruxes are the feasibility of the simulation, the feasibility of training a good policy from the simulation’s sparse signal (if the simulation itself is feasible), and the “political bargain” part, in particular the issue of representing nested, overlapping, and nebulous stakeholders/constituents (such as families, communities, societies, nations, ecosystems, etc.).