If you design systems in which the Scary Idea has no more than a vanishing likelihood of occurring, it is no longer an active concern.
Yeah, and the whole problem is how, specifically, you will do that.
If I (or anyone else) give you examples of what could go wrong, you can of course keep answering with “then I obviously wouldn’t use that design”. But at the end of the day, if you are going to build an AI, you have to commit to some design; merely rejecting the designs other people propose will not do the job.
There are plenty of perfectly good designs out there, e.g. CogPrime + GOLUM. You could calculate probabilistic risk based on these designs, rather than fear-mongering based on a naïve Bayes net optimizer.