Seeing the recent thread necromancy, it looks like this is a much more important question than I realized at the time, since it bears on AI-related existential risk.
The question, to summarize, was, “How exactly do you keep a good AGI prospect from being developed into unFriendly AGI, other than Security by Obscurity?”
My answer is that SbO (Security by Obscurity, not Antimony(II) Oxide, which doesn’t even exist) is not a solution here for the same reason it’s criticized everywhere else: I assume that is because it increases the probability of a rogue actor outsmarting mainstream researchers. Better to let the good guys be as well informed as the bad guys, so they can deploy countermeasures (their own AGI) when the bad guys develop theirs.
But then, I haven’t researched this 24/7 for the last several years, so this may be too trite a dismissal.
Security By Obscurity: When you can’t be bothered to implement a real solution!(tm)
And what, pray tell, does a “real solution” look like?