[Question] Looking for non-AI people to work on AGI risks
I’m worried about AGI safety, and I’m looking for non-AI people to worry with. Let me explain.
A lecture by futurist Anders Sandberg, online reading, and real-life discussions with my local Effective Altruism group have given me, a non-AI person (33-year-old physicist, engineer, climate activist and startup founder), the following convictions:
- AGI (Artificial General Intelligence, Superintelligence, or the Singularity) is a realistic possibility in the coming decades, say between 2030 and 2050
- AGI could well become orders of magnitude smarter than humans, fast
- If unaligned, AGI could well lead to human extinction
- If aligned (‘safe’), AGI could still lead to human extinction, for example because someone’s goals turn out to be faulty, or because someone removes the safety measures from the code
I’m active in two climate NGOs, where a lot of people worry about human extinction due to the climate crisis. I worry about this too, but at the same time I think the chance of human extinction due to AGI is much larger. Still, I don’t believe that chance is 100%: we could, for example, stop AGI development (which I think makes more sense than fleeing to Mars or working on a human-machine interface). Stopping development is an unusual angle for many AI safety researchers, futurists, startup founders, and the like. However, many non-AI people consider it a very sensible solution, at least if all else fails. I agree with them. It will not be an easy goal to achieve, and I see the downsides, but I think it makes the most sense of the options we have.
Therefore, I’m looking for non-AI people who are interested in working with me on common-sense solutions to the existential risks posed by AGI.
Does anyone know where to find them?