I checked their karma before replying, so I could tailor my answer to them if they were new. They have 1350 karma though, so I assume they are already familiar with us.
Same likely goes for the existential risk segment of EA. These are the only such discussion forums I’m aware of, but neither is x-risk only.
I’m a cryonaut from a few years back. I had deep philosophical differences with most of the arguments for AI Gods, which you may be able to discern from some of my recent discussions. I still think it’s not completely crazy to try to create a beneficial AI God (taking into consideration my fallible hardware and all), but I put a lot more weight on futures where the trajectory of intelligence is very important, yet nothing ever becomes as potent as a god.
Thanks for your pointers towards the EA segment; I wasn’t aware there was one.
In that case, let me give a quick summary of what I know of that segment of effective altruism.
For context, there are basically 4 clusters. While many or most people concentrate on traditional human charities, some people think animal suffering matters more than 1/100th as much as human suffering, and so think animal charities are more cost effective. Those are the first 2 clusters of ideas.
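As a toy illustration of that comparison (all figures below are made-up placeholders, not real charity numbers), the cost-effectiveness argument is just a weighted division:

```python
# Toy cost-effectiveness comparison under a chosen moral weight.
# All figures are illustrative placeholders, not real charity data.

human_cost_per_life_improved = 3000.0   # hypothetical $ per human significantly helped
animal_cost_per_life_improved = 10.0    # hypothetical $ per animal significantly helped
moral_weight_of_animal = 1 / 100        # how much one animal counts relative to one human

# "Human-equivalent" benefit bought per dollar for each option.
human_value_per_dollar = 1 / human_cost_per_life_improved
animal_value_per_dollar = moral_weight_of_animal / animal_cost_per_life_improved

print(f"Human charity:  {human_value_per_dollar:.6f} human-equivalents per dollar")
print(f"Animal charity: {animal_value_per_dollar:.6f} human-equivalents per dollar")
# With these placeholder numbers, the animal charity wins whenever the moral
# weight exceeds animal_cost / human_cost = 10 / 3000, i.e. about 1/300.
```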
Then you have people who think that movement growth is more important, since organizations like Raising for Effective Giving have so far been able to move something like $3 a year (I forget the exact figure) to effective charities for each dollar donated to them that year. Other organizations may have an even higher multiplier, but this is fairly controversial, because it’s difficult to measure future impact empirically, and it risks turning EA into a self-promoting machine which achieves nothing.
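To make the multiplier argument concrete (the 3x figure is just my rough recollection from above; everything else here is made up), the claim and the main objection look roughly like this:

```python
# Toy sketch of the movement-growth multiplier argument.
# The 3x figure is the rough number from memory; the rest is made up.

donation_to_meta_org = 1000.0   # dollars given to a fundraising/meta organization
money_moved_per_dollar = 3.0    # dollars it moves to effective charities per dollar received, per year

money_moved_this_year = donation_to_meta_org * money_moved_per_dollar
print(f"${donation_to_meta_org:.0f} to the meta org -> ~${money_moved_this_year:.0f} moved this year")

# The controversy: the multiplier is only as good as the counterfactual.
# If some of that money would have been donated anyway, the real leverage is lower.
counterfactual_fraction = 0.5   # hypothetical share that would have been given regardless
adjusted = money_moved_this_year * (1 - counterfactual_fraction)
print(f"Counterfactually adjusted: ~${adjusted:.0f}")
```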
The 4th category is basically weird future stuff. Mostly this is for people who think humans going extinct would be significantly worse than a mere 7 billion deaths. However, it’s not exclusively focused on existential risk. Unfortunately, we have no good ways of evaluating how effective various anti-nuclear efforts are at actually reducing existential risk, and it’s even worse for efforts aimed at prospective future technologies like AI. The best we can do is measure indirect effects, so the entire category is fairly controversial.
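The reason extinction gets weighted so heavily is the future people it forecloses. A crude expected-value sketch (every number here is an arbitrary placeholder, not a serious estimate) looks like:

```python
# Crude sketch of why extinction is treated as worse than "merely" killing
# everyone alive today. All numbers are arbitrary placeholders.

current_population = 7e9
expected_future_people_if_we_survive = 1e14   # placeholder guess at future lives foreclosed

deaths_from_catastrophe = current_population
lives_lost_to_extinction = current_population + expected_future_people_if_we_survive

print(f"Non-extinction catastrophe: ~{deaths_from_catastrophe:.1e} lives")
print(f"Extinction:                 ~{lives_lost_to_extinction:.1e} lives (dominated by the future term)")
```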
I would further divide the “weird future stuff” category into Global Catastrophic Risk/x-risk and non-GCR/x-risk stuff. For example, Brian Tomasik has coined the term s-risk for risks of astronomical future suffering. He makes a strong case for wild animals experiencing more net suffering than happiness, and so thinks that even without human extinction the next billion years are likely to be filled with astronomical amounts of animal suffering.
Within the GCR/x-risk half of the “weird future stuff” category, there are maybe 4 or 5 causes I’m aware of. Nuclear war is the obvious one, along with climate change. I think most EAs tend to think climate change is important, but just not tractable enough to be a cost-effective use of resources. The risk of another 1918 Flu pandemic, or of an engineered pandemic, comes up occasionally, especially in relation to the new CRISPR gene editing technology. AI is a big concern too, but more controversial, since it is more speculative. I’m not sure I’ve ever seen asteroid impacts or nanotechnology floated as a cost-effective means of reducing x-risk, but I don’t follow that closely, so perhaps there is some good discussion I’ve missed.
Much or most of the effort I’ve seen is to better understand the risks, so that we can better allocate resources in the future. Here are some organizations I know of which study existential risk, or are working to reduce it:
The Future of Humanity Institute is at Oxford and is led by Nick Bostrom. They primarily do scholarly research, and focus a good chunk of their attention on AI. There are now more academic papers published on human extinction than on dung beetles, largely due to their efforts to lead the charge.
The Centre for the Study of Existential Risk is out of Cambridge. I don’t know much about them, but they seem to be quite similar to FHI.
The Future of Life Institute was founded by a bunch of people from MIT, but I don’t believe there is any official tie to the university. They fund research too, but they seem to have a larger body of work directed at the general public. They give grants to researchers, and publish articles on a range of existential risks.
Perhaps there are discussion forums associated with these groups, but I’m unaware of them. There are a bunch of EA Facebook groups, but as far as I know they are mostly regional. However, the EA Forum and here are the closest things I know of to what you’re after.
The B612 Foundation is working on impact risks by trying to get some IR cameras out to L2 and L3 at least, and hopefully to S5. Planetary Resources say that objects found with the IR cameras they use for mining will go into the PDSS database.
Thanks! I’ll get in touch with the EA community in a bit. I’ve got practical work to finish and I find forums too engaging.