I think another component of the trade-off might be the possible competition (from the standpoint of analysis, not of action) between various seed AIs, or various seed AI technologies: an AI that hides its capacities might receive fewer resources than another seed AI that does not.
Active competition would arise if the seed AI realizes this fact and includes it in whatever strategic reasoning it uses to decide what to hide, but this seems less likely to me than the following.
Passive competition is akin to natural selection: any AI hiding its abilities (such “hiding” does not imply consciousness: an AI can be more powerful than its designers and users realize, just as human breeders have always underestimated the intelligence of the animals they use as tools and meat) will compete for human and computational resources with other AIs, and one that does not hide its abilities has better odds of “staying on”.
Of course, the question (and the argument proposed here) supposes that humans voluntarily created the seed AI and control its power, whereas the seed AI could instead appear as an involuntary side effect of technology (e.g. the toy AI project of many SciFi novels, starting with Orson Scott Card’s AI in “Ender’s Game”) and be powered by surplus energy “stolen” from other processes. In that case, the dilemma between perceived and effective power reduces to the dilemma between staying hidden and revealing itself, and to how many and which people (and to what extent).