If we’re being overseen, then it seems true by definition that we’ll run into other AGIs if we build an FAI.
This seems likely, but it is not true by definition. In fact, if I were designing an overseer, I can see reasons why I might prefer one that keeps itself hidden except where intervention is required. Such an overseer, upon detecting that the overseen have created an AI with an acceptable goal system, might actively destroy all evidence of its own existence.
True, mea culpa. I swear, there’s something about the phrase “by definition” that makes you misuse it even when you’re already aware of how often it’s misused. I almost never say “by definition,” and yet it still screwed me over.