Is AI risk assessment too anthropocentric?
Hi everyone,
I’ve recently discovered LessWrong and love it! So first, let me thank you all for fostering such a wonderful community.
I’ve been reading a lot of the AI material and I find myself asking a question you all have surely considered, so I wanted to pose it.
If I believe that human beings are evolutionarily descended from apes, and I ask myself whether apes, had they been able to control whether human evolution happened, should have allowed it or stopped it, I’m honestly not sure what the answer should be.
It seems like apes would in all likelihood be better off without humans around, so from the apes’ perspective, they probably should not have allowed it to happen. Looked at from a different frame of reference, though, such as what is good for the Earth or for the universe, maybe the evolution of humans from apes was a good thing. And certainly from the perspective of humans, most of us would say that allowing it to happen was a good thing.
Do we find ourselves in a similar scenario with humans and AI? Are there benefits, from frames of reference other than humanity’s, to allowing the development of AI, even if that AI may pose existential threats to human civilization? And if so, are those perspectives being taken fully enough into account when we think about AI risk assessment?