I’m someone who is moving mainly in the opposite direction (from AI to climate change). I see AGI as a lot harder to achieve than most people do, mainly because I expect the potential political ramifications to slow development, and because I think it will need experiments with novel hardware, which makes it more visible than just coding. So I see it as relatively easy to stop, at least inside a country. Multi-nationally it would be trickier.
Some advice: I would try to frame your effort as “Understanding AGI risk”. While you think there is risk currently, keeping an open mind about the status of that risk is important. If AGI turns out to be free of existential risk, then it could help with climate adaptation, even if it is not in time for climate mitigation.
Edit: You could frame it just as understanding AI, and put together independent briefs on each project for policy makers, covering the likely impacts, both positive and negative, and the state of play. Getting a good reputation and maintaining independence might be hard, though.
Hi WH, thank you for the reply! I find it really heartening and encouraging to learn what others are thinking.
Could you explain what hardware you think would be needed? It’s the first time I’m hearing someone talk about that, so of course I’m curious to learn what you think it would take.
I agree with your point that understanding the risks of AI projects is a good way of framing things. Given the magnitude of AGI risks (as I understand it now, human extinction), an alarmist tone in a policy report would still be justified, in my opinion. I also agree that we should keep an open mind: I see the benefits of AI, and even more the benefits of AGI, which would be biblical if we could control the risks. Climate adaptation could indeed be carried out a lot better, as could many other tasks. However, I think that we will not be able to control AGI, and we may therefore go extinct if we develop it anyway. But agreed: let’s keep an open mind about the developments.
Do you know of any reliable overview of AGI risks? It would be great to have a kind of IPCC equivalent that’s as uncontroversial as possible, to convince people that this problem needs attention. Or papers from a reliable source stating that there is a nonzero chance of human extinction. Any such information would be great!
By the way, if I can help you with ideas on how to fight the climate crisis, let me know!
Also, another thought: (partially) switching careers comes with a large penalty, since you don’t have as much prior knowledge, experience, credibility, or network in the new topic. The only reason I’m considering it is that I think AGI risk is a lot more important to work on than climate risk. If you’re moving in the opposite direction:
1) Do you agree that such moving comes with a penalty?
2) Do you think that climate risk is a lot more important to work on than AGI risk?
If so, only one of us can be right. It would be nice to know who that is, so we don’t make silly choices.