If somehow international cooperation gives us a pause on going full AGI or at least no ASI—what then?
Just hope it never happens, like nuke wars?
The answer now is to set later generations up to be more capable.
This could mean doing fundamental research (whether in AI alignment or international game theory or something else), it could mean building institutions to enable it, and it could mean making them actually smarter.
Genes might be the cheapest/easiest way to affect marginal chances, given the talent already involved in alignment and the amount of resources required to get involved politically or in building institutions.
If somehow international cooperation gives us a pause on going full AGI or at least no ASI—what then?
Just hope it never happens, like nuke wars?
The answer is no, though under certain circumstances it might have to come to that.
The usual case (assuming the government bans or restricts compute, and/or limits algorithmic research) is to use this time either to have the government fund AI alignment research, or to pursue a direct project to build AIs that are safe enough to automate AI safety research. Given that we don't have to race against other countries, we could afford far higher safety taxes than usual to make AI safe.
I think the key crux is that I don't think genetic editing is the cheapest/easiest way to affect marginal chances of doom, because of the time lag plus the need to reorient the entire political system, which is not cheap. To me, the cheapest/easiest strategy for affecting doom probabilities is preparatory AI alignment/control work such that we can safely hand off the bulk of the alignment work to the AIs, which then solve the alignment problem fully.
Your direction sounds great—but how well can $4M move the needle there? How well can genesmith move the needle with his time and energy?
I think you're correct about the cheapest/easiest strategy in general, but completely off with regard to marginal advantages.
If we get to a freeze, major labs will already be pouring massive amounts of money and human capital into direct AI alignment and into using AIs to align AGI, and the further along in capabilities we get, the more impactful such research would be.
Genesmith's strategy benefits much more from starting now and has far less human talent and capital already involved, hence the higher marginal value.
He already addressed this.