I’m not a biologist, but given that animal bodies exhibit principal-agent problems, e.g., auto-immune diseases and cancers, I suspect ant colonies (and large AIs) would also have these problems.
Cancer is a case where an engineered genome could improve over an evolved one. We’ve managed to write software (for the most vital systems) that copies itself without error with such high probability that we expect never to see that part malfunction.
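To make “copies without error, with such high probability” concrete: the standard trick is to pair every copy with a cryptographic checksum and retry on mismatch, so that an undetected error would require a hash collision. Here is a minimal sketch in Python; the function name and retry policy are illustrative, not taken from any particular system:

```python
import hashlib

def verified_copy(data: bytes, max_retries: int = 3) -> bytes:
    """Copy `data`, accepting the result only if its SHA-256 digest matches.

    A corrupted copy slips through only if it collides with the original's
    digest (probability ~2^-256), so undetected copy errors are, for all
    practical purposes, impossible.
    """
    expected = hashlib.sha256(data).digest()
    for _ in range(max_retries):
        copy = bytes(data)  # stand-in for the actual (possibly noisy) copy step
        if hashlib.sha256(copy).digest() == expected:
            return copy
    raise RuntimeError("copy failed verification repeatedly; refusing to proceed")
```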
One reason that evolution hasn’t constructed sufficiently good error correction is that the most obvious way to do this makes the genome totally incapable of new mutations, which works great until the niche changes.
However, an AI subagent would need to be able to adjust itself to unexpected conditions, and thus can’t rely on digital copying alone to prevent malfunctions.
So you agree that it’s possible in principle for a singleton AI to remain a singleton (provided it starts out alone in the cosmos), but you believe it would sacrifice significant adaptability and efficiency by doing so. Perhaps; I don’t know either way.
But the AI might make that sacrifice if it concludes that (eventually) losing singleton status would cost its values far more than the sacrifice is worth (e.g. if losing singleton status consigns the universe to a Hansonian hardscrapple race to burn the cosmic commons (pdf) rather than a continued time of plenty).
I believe it would at the very least have to sacrifice all adaptability by doing so: only sending out nodes with all instructions in ROM, with standing orders to periodically reset all non-ROM memory and to self-destruct on any detected failure of their triple-redundant ROM, along with an extremely strong directive against anything that would let nodes store long-term state.
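A hedged sketch of the node logic I have in mind, assuming the node holds three ROM copies, treats any disagreement among them as a failure, and wipes all mutable state every cycle (all names here are illustrative):

```python
def rom_check(rom_a: bytes, rom_b: bytes, rom_c: bytes) -> bytes:
    """Return the ROM image if all three copies agree; otherwise self-destruct."""
    if rom_a == rom_b == rom_c:
        return rom_a
    # Any disagreement means at least one ROM copy has decayed; per the
    # scheme above, the node destroys itself rather than risk running
    # (or propagating) corrupted instructions.
    raise SystemExit("ROM redundancy check failed: self-destructing")

def maintenance_cycle(rom_a: bytes, rom_b: bytes, rom_c: bytes,
                      volatile_state: dict) -> dict:
    """Periodic reset: verify the ROM, then wipe all non-ROM memory."""
    rom_check(rom_a, rom_b, rom_c)
    volatile_state.clear()  # no long-term state survives a cycle
    return volatile_state
```

SystemExit here is of course a stand-in for a physical self-destruct, and a real design would presumably compare the ROM copies continuously in hardware rather than at scheduled checkpoints.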
Remember, you’re the one trying to prove the impossibility of a task here. Your inability to imagine a solution to the problem is only very weak evidence.