Roko would probably call “the most important century” work “building a stable equilibrium to land an AGI/ASI on”.
I broadly agree with you and Roko that this work is important and that it would often make more sense for people to do this kind of work than “narrowly-defined” technical AI safety.
One reason why this may be the case, which you didn’t mention, is money: technical AI safety is probably bottlenecked on funding, but much of the “most important century”/“stable equilibrium” work is more amenable to conventional VC funding, and the funders don’t even need to be EA/AI x-risk/“most important century”-pilled.
In a comment on Roko’s post, I offered my classification of these “stable equilibrium” systems and the work that should be done. Here I reproduce it, with extra directions that occurred to me later:
1. Digital trust infrastructure: decentralised identity, secure communication (see Layers 1 and 2 in the Trust Over IP Stack), proof-of-humanness, and proof of AI provenance (e.g., a proof that a given artifact was created by a given agent, such as one provided by OpenAI; watermarking has failed, so new robust solutions, perhaps based on zero-knowledge proofs, are needed). A minimal sketch of such an attestation appears at the end of this comment.
2. Infrastructure for collective sensemaking and coordination: infrastructure for communicating beliefs and counterfactuals, making commitments, imposing constraints on agent behaviour, and monitoring compliance. We at Gaia Consortium are doing this.
3. Infrastructure and systems for collective epistemics: next-generation social networks (e.g., https://subconscious.network/), media, content authenticity, and Jim Rutt’s “info agents” (he advises “three different projects that are working on this”).
4. Related to the previous item, in particular to content authenticity: systems for personal data sovereignty (I don’t know any good examples besides Inrupt), dataset verification/authenticity more generally, and dataset governance.
5. The science/ethics of consciousness and suffering mostly solved, plus much more effort in biology to understand whom (or whose existence, joy, or non-suffering) civilisation should value, to better inform the constraints and policies for economic agents (which are monitored and verified through the infrastructure from item 2).
6. Systems for political decision-making and collective ethical deliberation: see the Collective Intelligence Project, Policy Synth, and simulated deliberative democracy. These types of systems should also be used for governing all of the above layers.
7. Accelerating enlightenment using AI teachers (Khanmigo, Quantum Leap) and other tools for individual epistemics (Ought) so that the people who participate in governance (the previous item) can do a better job.
The list above covers all the directions mentioned in the post, plus a few more important ones.
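To make the “proof of AI” idea from item 1 more concrete, here is a minimal sketch of a signature-based attestation: the provider signs a hash of the artifact together with an agent identifier, and anyone holding the provider’s public key can later verify the binding. This is only an illustration under my own assumptions (the function names are hypothetical, and a robust scheme would likely involve zero-knowledge proofs rather than a bare signature, as noted above); it is not any provider’s actual API.

```python
# Hypothetical sketch of artifact provenance attestation, NOT any provider's real API.
# The provider signs (hash of artifact, agent id); a verifier checks the signature
# and that the hash matches the artifact in hand.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def attest_artifact(artifact: bytes, agent_id: str, private_key: Ed25519PrivateKey) -> dict:
    """Provider side: bind an artifact to the agent that produced it."""
    statement = json.dumps(
        {"artifact_sha256": hashlib.sha256(artifact).hexdigest(), "agent": agent_id},
        sort_keys=True,
    ).encode()
    return {"statement": statement, "signature": private_key.sign(statement)}


def verify_attestation(artifact: bytes, attestation: dict, public_key: Ed25519PublicKey) -> bool:
    """Verifier side: check the signature, then check the hash matches the artifact."""
    try:
        public_key.verify(attestation["signature"], attestation["statement"])
    except Exception:
        return False
    claimed = json.loads(attestation["statement"])["artifact_sha256"]
    return claimed == hashlib.sha256(artifact).hexdigest()


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    artifact = b"some model-generated text or image bytes"
    att = attest_artifact(artifact, agent_id="example-model-v1", private_key=key)
    print(verify_attestation(artifact, att, key.public_key()))   # True
    print(verify_attestation(b"tampered", att, key.public_key()))  # False
```

In a real deployment, the verifier would presumably resolve the provider’s public key through the decentralised identity layer from item 1 rather than holding it directly, and the attestation would need to avoid leaking more about the artifact or the agent than necessary, which is where zero-knowledge constructions would come in.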