Of course the model “OAIs are extremely dangerous if not properly contained; let’s let everyone have one!” isn’t going to work. But there are many things we can try with an OAI (building an FAI, for instance), and most importantly, some of these things will be experimental (whereas the direct FAI approach relies on getting the theory right, with no opportunity to test it). And there is a window that doesn’t exist with a genie: a window where people realise superintelligence is possible, where we might be able to get them to take safety seriously, and where they’re not all dead yet. We might also be able to get exotica like a limited-impact AI or something similar, if we can find safe ways of experimenting with OAIs.
And there seems no drawback to pushing an UFAI project into becoming an OAI project.
Cousin_it’s link is interesting, but it doesn’t seem to have anything to do with OAI, and instead looks like a possible method of directly building an FAI.
Of course the model “OAIs are extremely dangerous if not properly contained; let’s let everyone have one!” isn’t going to work.
Hmm, maybe I’m underestimating the amount of time it would take for OAI knowledge to spread, especially if the first OAI project is a military one (on the other hand, the military and their contractors don’t seem to be having better luck with network security than anyone else). How long do you expect the window of opportunity (i.e., the time from the first successful OAI to the first UFAI, assuming no FAI gets built in the meantime) to be?
some of these things will be experimental
I’d like to have FAI researchers determine what kind of experiments they want to do (if any, after doing appropriate benefit/risk analysis), which probably depends on the specific FAI approach they intend to use, and then build limited AIs (or non-AI constructs) to do the experiments. Building general Oracles that can answer arbitrary (or a wide range of) questions seems unnecessarily dangerous for this purpose, and may not help anyway depending on the FAI approach.
And there seems no drawback to pushing an UFAI project into becoming an OAI project.
There may be, if the right thing to do is to instead push them to not build an AGI at all.
One important fact I haven’t been mentioning: OAIs help tremendously with medium-speed takeoffs (fast takeoffs are dangerous for the usual reasons; slow takeoffs mean that we will have moved beyond OAIs by the time the intelligence level hits dangerous levels), because we can then use them to experiment.
There may be, if the right thing to do is to instead push them to not build an AGI at all.
I’m interacting with AGI people at the moment (organising a joint-ish conference), and will have a clearer idea of how they react to these ideas at a later stage.
slow takeoffs mean that we will have moved beyond OAIs by the time the intelligence level hits dangerous levels
Moved where/how? Slow takeoff means we have more time, but I don’t see how it changes the nature of the problem. A short time to WBE makes a (not particularly plausible) slow takeoff similar to the (moderately likely) failure to develop AGI before WBE.
Wot cousin_it said.