Today's thoughts:
I suspect it's not possible to build autonomous aligned AIs (low confidence). The best we can do is some type of hybrid humans-in-the-loop system. Such a system would be powerful enough to eventually give us everything we want, but it would also be much slower and intellectually inferior to what is possible without humans in the loop. In other words, the alignment tax will be enormous. The only way the safe system can compete is if the unsafe system is never built.
Therefore we need AI Governance. Fortunately, political action is getting a lot of attention right now, and the general public seems positively inclined toward more cautious AI development.
After getting an immediate stop/pause on larger models, I think the next step might be to use current AI to cure aging. I don't want to miss the singularity because I died first, and I think I'm not the only one who feels this way. It's much easier to be patient and cautious in a world where aging is a solved problem.
We probably need a strict ban on building autonomous superintelligent AI until we have reached technological maturity. It's probably not a great idea to build them after that either, but by then they would probably no longer pose the same risk. This last claim is not at all obvious. I think reaching technological maturity would let us defend against any military/hard-power attack, for example by having our own nanobot defence system to counter hostile nanobots. The hardest attack vector to defend against would be manipulation. That is a harder problem, but I think there are ways to solve it too, given enough time to set up our defences.
An important crux for what the end goal is, including whether there is some stable end state where we're out of danger, is to what extent technological maturity also leads to a stable cultural/political situation, or whether that keeps evolving in ever new directions.