The question is: what could a species as primitive as ours offer to AI?
The best I could come up with is "predictability". We humans have a relatively stable and "documented" "architecture", so as long as civilization is built on us, an AI can more or less safely predict the consequences of its actions and plan with a high degree of certainty. But if this society collapses, is destroyed, or is radically changed, the AI will have to deal with a very unpredictable situation, and with other AIs whose behavior in that situation is anyone's guess.
We need to become able to trade with ants.
We also need to uplift ourselves in terms of thermal performance. AI is going to be very picky about us not overheating things; but if we have a wide enough window of coprotective alignment, we can use it to upgrade our bodies to run orders of magnitude cooler while otherwise staying the same. It would take a very strong, very aligned AI to pull off such a thing, but physics permits it.
In other words, we aren't just offering them something in trade. In an agentically coprotective outcome, you're not trading objects; you're trading the valuing of each other's values. The AI takes on valuing humans intrinsically, and humans take on valuing the AI intrinsically.