I would think that the sorts of hypotheticals that would be most useful to entertain would be ones that explore the safety of the most secure systems anyone would have an actual incentive to implement.
Could you contain a Strong AI running on a computer with no output systems, sealed in a lead box at the bottom of the ocean? Presumably yes, but in that case, you might as well skip the step of actually making the AI.
You say “presumably yes”. The whole point of this discussion is to listen to everyone who will say “obviously no”; their arguments would automatically apply to all weaker boxing techniques.
All the suggestions so far that might allow an AI without conventional outputs to get out would be overcome by the lead box+ocean defenses. I don’t think that containing a strong AI is likely to be that difficult a problem. The really difficult problem is containing a strong AI while getting anything useful out of it.
If we are not inventive enough to find a menace that isn’t obviously shielded against by lead+ocean, then more complex tasks, like, say, actually designing a FOOM-able AI, are beyond us anyway…
I… don’t believe that.
I think that making a FOOM-able AI is much easier than making an AI that can break out of a (considerably stronger) lead box in solar orbit.
And you are completely right.
I meant that designing a working FOOM-able AI (or a non-FOOM-able AGI, for that matter) is vastly harder than finding a few hypothetical high-risk scenarios.
I.e., walking the walk is harder than talking the talk.