We build what I called a modest superintelligence: one or more humans who are naturally extremely intelligent or who underwent intelligence amplification. They figure out how to build a stable world government and decide that it's safer to do WBE and gradually increase human (em) intelligence than to build an FAI.
Safely and gradually enhancing human intelligence is hard. I agree that a team of human geniuses with unlimited time and resources could probably do it. But doing it safely takes orders of magnitude more resources and thinking time than the fools "trying" to make UFAI will need.
Suppose a genetics project makes a lot of very smart babies. The project will find it hard to indoctrinate them while also educating them well and preserving diversity. A militaristic boot camp will get them all marching in line, but it will squash most of their curiosity and leave little room for skill. Handing them off to foster parents with STEM backgrounds gets you a bunch of smart people with no organizing control; that's just a shift in demographics, and you have no hope of capturing all the value. Some will work on AI safety, intelligence enhancement, or whatever, and some will work in all sorts of other jobs.
Whole brain emulation seems possible. I question whether we get it before someone makes UFAI, but it's plausible that we do. If a group of smart, coordinated people ends up with the first functioning mind uploading and the first nanomachines, is fine with duplicating themselves a lot, and has computers fast enough to let them think really fast, then that is enough for a decisive strategic advantage. If they upload everyone else into a simulation that contains no access to anything Turing complete (so no one can make UFAI within the simulation), then they could guide humanity towards a long-term future without any superintelligence. They will probably figure out FAI eventually.
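To make the "nothing Turing complete" condition concrete: it means the simulation only exposes systems in which every computation provably halts. Here is a minimal sketch (entirely my own illustration, not anything from the scenario above) of a toy language whose only loop construct is bounded repetition, a fragment of the classic LOOP language. Every program in it terminates, so it is strictly weaker than Turing complete.

```python
# A toy "bounded loop" language (hypothetical, my own illustration).
# Its only loop repeats a body a number of times fixed at loop entry,
# so every program halts -- the language is not Turing complete.

def run(program, env=None):
    """Interpret a program (a list of statements) against an environment.

    Statements:
      ("set", var, n)        -- var := n
      ("inc", var)           -- var := var + 1
      ("repeat", var, body)  -- run body exactly env[var] times; the count
                                is read once, up front, so the body cannot
                                extend its own loop.
    """
    env = {} if env is None else env
    for stmt in program:
        if stmt[0] == "set":
            _, var, n = stmt
            env[var] = n
        elif stmt[0] == "inc":
            _, var = stmt
            env[var] = env.get(var, 0) + 1
        elif stmt[0] == "repeat":
            _, var, body = stmt
            for _ in range(env.get(var, 0)):  # count fixed before body runs
                run(body, env)
    return env

# 3 * 4 via nested bounded loops: expressive, but no way to loop forever.
print(run([
    ("set", "a", 3), ("set", "b", 4), ("set", "out", 0),
    ("repeat", "a", [("repeat", "b", [("inc", "out")])]),
]))
# -> {'a': 3, 'b': 4, 'out': 12}
```

The catch is that the restriction has to hold for every mechanism in the simulation at once; a single overlooked unbounded loop anywhere restores Turing completeness.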