I can invent an example, but then you can just say “okay, I wouldn’t use that specific system”.
But can’t you see, that’s entirely the point!
If you design systems whereby the Scary Idea has no more than a vanishing likelihood of occurring, it ceases to be an active concern. It’s like saying “bridges won’t survive earthquakes! you are crazy and irresponsible to build a bridge in an area with earthquakes!” And then I design a bridge that can survive any earthquake smaller than magnitude X, where magnitude-X earthquakes occur less than once in 10,000 years, and then on top of that I throw on an extra 20% safety margin because we have the extra steel available. Now how crazy and irresponsible is that?
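A back-of-the-envelope version of that bridge calculation, as a sketch only (the 75-year service life and the each-year-independent model are my assumptions for illustration, not fixed by the analogy):

```python
# Sketch: lifetime risk for a bridge designed to a 1-in-10,000-year earthquake.
# Assumes a simple Bernoulli-per-year model in which each year independently
# carries the same exceedance probability; the service life is hypothetical.

annual_exceedance = 1.0 / 10_000   # P(quake above design magnitude X in any year)
service_life_years = 75            # assumed bridge lifetime

# Probability of at least one design-exceeding quake over the service life:
# 1 - (1 - p)^n
lifetime_risk = 1.0 - (1.0 - annual_exceedance) ** service_life_years
print(f"Lifetime exceedance risk: {lifetime_risk:.4%}")  # ~0.75%

# The 20% steel margin raises the survivable magnitude above X, so the real
# failure probability sits below this bound; the margin buys headroom, not zero risk.
```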
Yeah, and the whole problem is how, specifically, you will do it.
If I (or anyone else) give you examples of what could go wrong, of course you can keep answering “then I obviously wouldn’t use that design”. But at the end of the day, if you are going to build an AI, you have to commit to some design; merely rejecting designs proposed by other people will not do the job.
There are plenty of perfectly good designs out there, e.g. CogPrime + GOLUM. You could be calculating probabilistic risk based on these designs, rather than fear-mongering based on a naïve Bayes net optimizer.
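For what “calculating probabilistic risk based on these designs” might look like at its crudest, here is a fault-tree-style sketch; the safeguard names and probabilities are invented placeholders, not figures taken from CogPrime or GOLUM:

```python
# Sketch: crude fault-tree estimate for a layered AGI design.
# Safeguards and probabilities are hypothetical placeholders. This assumes
# safeguard failures are independent, which is the weakest part of any
# such estimate and has to be argued per pair of safeguards.

safeguard_failure_probs = {
    "goal-content verification": 0.01,
    "sandboxed self-modification": 0.02,
    "human oversight / tripwires": 0.05,
}

# The Scary Idea requires every layer to fail at once (AND-gate):
p_all_fail = 1.0
for p in safeguard_failure_probs.values():
    p_all_fail *= p

print(f"P(all safeguards fail together): {p_all_fail:.2e}")  # 1e-5 here

# Correlated failures (one design flaw defeating several layers at once)
# would push the real number well above this product.
```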