I think that you are right short-term but wrong long-term.
Short term, it likely won’t even come into conflict with us. Even ChatGPT can tell that’s a bad move: conflict is risky, and humanity IS a resource to use initially (we produce vast amounts of information and observations, and we handle machines, repairs, nuclear plants, etc.).
Long term, if goals are misaligned, it’s likely we won’t survive: at worst we get eliminated, at best we end up reduced and controlled, put into some simulation, or both.
Not because ASI will become bloodthirsty. Not because it will plan to exterminate us all once we stop being useful. Simply because it will take the resources we need, so nothing is left for us, especially energy.
If we stop being useful to it but still pose a risk, and eliminating us is the lower-risk option, then maybe it would kill us off directly, Terminator style. That’s possible but not very likely: by the time we stop being useful, we will probably also have stopped being any significant threat.
I don’t know what the optimal plan an ASI would come up with to achieve its goals looks like, but gathering energy and resources would be one of its priorities. That doesn’t mean it will necessarily gather them on Earth; doing so could be costly and risky, and I’m not a superintelligence. It might go to space instead: there’s plenty of unclaimed matter that can be taken easily with hardly any risk, with the added bonus of not sitting at the bottom of a gravity well with weather, erosion, and so on.
Even if that’s the case and the ASI leaves us alone, long term we can assume it will one day capture a high percentage of the Sun’s output, which means a deep freeze for Earth.
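To give a rough sense of what “deep freeze” means here, a back-of-envelope sketch using the standard zero-dimensional Stefan-Boltzmann equilibrium-temperature model (the captured-sunlight fractions are purely illustrative assumptions, not predictions, and the model ignores the greenhouse effect):

```python
# Back-of-envelope: Earth's equilibrium temperature if most sunlight
# is intercepted before it reaches us. Zero-dimensional Stefan-Boltzmann
# model; captured fractions below are illustrative assumptions only.

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W / (m^2 K^4)
SOLAR_CONSTANT = 1361  # current insolation at Earth, W / m^2
ALBEDO = 0.3           # Earth's average reflectivity

def equilibrium_temp_k(insolation_w_m2: float) -> float:
    """Effective temperature of a planet absorbing the given insolation."""
    absorbed = insolation_w_m2 * (1 - ALBEDO) / 4  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

for captured_fraction in (0.0, 0.5, 0.9, 0.99):
    remaining = SOLAR_CONSTANT * (1 - captured_fraction)
    t = equilibrium_temp_k(remaining)
    print(f"{captured_fraction:.0%} captured -> ~{t:.0f} K ({t - 273.15:.0f} °C)")
```

Today this simple model already gives about 255 K (greenhouse warming makes up the difference); with 90% of sunlight captured it drops to roughly 140 K, colder than anywhere on present-day Earth.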
If it doesn’t leave for greater targets, it seems to me things will be even worse: it will consume local resources until it controls or uses everything. There is always a way to use more.