It would by definition not be a bad thing. “Bad thing” is a low-effort heuristic that is inappropriate here, since I interpret “bad” to mean that which is not good, and “good” includes aggregate human desires, which in this scenario have been defined to include a desire to be turned into paperclips.
The ideal scenario would be for humans and AIs to form a mutually beneficial relationship in which the furtherance of human goals also furthers the goals of AIs. One potential way to accomplish this would be to create a Neuralink-esque integration of AI into human biology such that human biology becomes an intrinsic requirement for future AI proliferation. If AGIs require living, healthy, happy humans in order to succeed, then they will ensure that humans are living, happy, and healthy.
I don’t see how a Neuralink-like connection would solve the problem. If a superintelligent AI needs biological material for some reason, it can simply create it, or it could find a way to circumvent the links if they make it harder to reach its goal. For an AGI to “require” living, healthy, happy humans, that requirement has to be part of its own goal or value system.