I agree to some extent, depending on how efficient advertising for a specific charity through a meta-charity is. I see what you’re saying now after re-reading it; to be honest, I had only very briefly skimmed it last night/this morning. Curious, do you have any stats on how effective Intentional Insights is at gathering more money for these other charities than is given to them directly?
Also, how does InIn decide whether something is mitigating existential risk? I’m not overly familiar with the topic, but donations to “Against Malaria Foundation” and others mentioned don’t sound like the specific sort of charity I’m mostly interested in.
Yup, both good questions.
For the answer to the first, about effectiveness, see the two paragraphs beginning with the paragraph that starts with “For some.” It’s pretty hard to measure the exact impact of marketing dollars, so the best proxy is a combination of how widely an article is read and specific evidence of its impact on individuals, combining quantitative and qualitative approaches. Thus, we can see that this article was widely shared, over 1K times, which means it was likely read by over 100K people. Moreover, the article is clearly impactful, as we can see from the specific comment of the person who was impacted, and his sway with others in his role as group leader. We can’t see the large numbers of people who were impacted but chose not to respond, of course.
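For concreteness, here is a minimal sketch of the kind of reach estimate described above; the readers-per-share multiplier of 100 is an illustrative assumption, not a measured figure.

```python
# Rough reach estimate: observed shares times an assumed readers-per-share
# multiplier. The multiplier of 100 is a hypothetical illustration, not a
# measured value.
shares = 1_000              # the article was shared over 1K times
readers_per_share = 100     # assumed average readers reached per share

estimated_readers = shares * readers_per_share
print(f"Estimated readership: {estimated_readers:,}")  # -> 100,000
```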
For the answer to the second, donations to AMF don’t do that much to mitigate existential risk. However, getting people turned on to Effective Altruism does, since they then become familiar with the topic of existential risk, which occupies a lot of attention among effective altruists, including attention to MIRI.
The problem with selling existential risk to a broad audience is that, honestly, they generally don’t buy it. It’s hard for them to connect emotionally to AI and other existential risk issues. It’s much easier to connect emotionally to GiveWell, etc. However, once they get into Effective Altruism, they learn about existential risk and become more oriented toward donating to MIRI, etc.
This is the benefit of being strategic and long-term oriented, in other words rational, about donating to InIn. Getting more people engaged with these issues will result in more good than one’s own direct donations to MIRI, I think. But obviously, that’s my perspective; otherwise I wouldn’t have started InIn and would have just donated directly to MIRI and other causes that I held important. It’s up to you to evaluate the evidence. One path that many donors who give to InIn choose is to split their donations, giving some to InIn and some to MIRI. It’s up to you.
I’m in the middle of writing an essay due tomorrow morning, so pardon the slightly off-topic and short reply (I’ll get back to you on the other matters later), but I am particularly curious about one topic that comes up here a lot, as far as I can tell, in discussions of existential risk: AI and its relation to existential risk. By the sounds of it, I may hold an extremely unpopular opinion. While I acknowledge that AI could pose an existential risk, my personal view (which I don’t have the time to discuss here or the points required to make a full post on the subject) is that AI is probably our best bet at mitigating existential risk and maximizing the utility, security, and knowledge I previously mentioned. Does that put me at odds with the general consensus on the issue here?
I wouldn’t say your opinion is at odds with many here. Many hold unfriendly AI to be the biggest existential risk, and friendly AI to be the best bet at mitigating existential risk. I think so as well. My personal opinion, based on my knowledge of the situation, is that real AI is at least 50 years off, and more likely on the scale of a century or more. We are facing much bigger short and medium-term existential risks, such as nuclear war, environmental disaster, etc. Helping people become more rational, which is the point of Intentional Insights, mitigates short, medium, and long-term existential risks alike :-)
Do we, now? Tell me about the short-term existential risk of an environmental disaster.
I stated short- and medium-term risks in that sentence. I am 98% confident that you are more than smart enough to understand that short-term risk applies to things like nuclear war more than environmental catastrophe, and that you are just trying to be annoying with your comment.
You’re badly calibrated :-P
OK, tell me about the medium-term existential risk of an environmental disaster.
Lol, thanks for the calibration warning.
Not interested in discussions of environmental disasters. I’ve been reading way too much about this with the new climate accord to want to have an LW-style discussion about it. I think we can both agree that there is a significant likelihood of problems, such as major flooding of low-lying areas, in the next 20-30 years.
There were floods in the past that produced damage, and there will likely be some in the future, but why do you believe it’s an X-risk?
I think floods would be only one type of problem from climate change. Others would be extreme weather, such as hurricanes, tornadoes, etc. These would be quite destabilizing for a number of governments and would contribute to social unrest, which has unanticipated consequences. Even worse, at some point we could face abrupt climate change.
Now, this is all probabilistic, and I’m not saying it will necessarily happen, but this is a super-short version of why I consider climate change an X-risk.
...magically transforms into...
Heh. So, “the sky is falling!” means “a chance of rain on Monday”?
I just gave one example of the kind of environmental problem quite likely to occur in the medium term. There are many others. Like I said, not interested in discussing these :-)
Millions of Bangladeshis having to relocate (or build dykes) would indeed be a problem, but hardly an existential risk in the LWian sense of the term.
I replied to this point here
This is so nostalgic, this was what the GW alarmists were saying 20 years ago.
You still haven’t taken up the bet that you said you would
Yellowstone, strong solar flares, and asteroid impacts?
Solar flares may be pretty bad now that we are so reliant on the power grid, but they are hardly an existential risk; Yellowstone erupts about once every 800,000 years on average, which is hardly short-term; and asteroid impacts large enough to worry about are even rarer than that.
Rarity of events doesn’t mean they can’t happen in the short term.
It doesn’t mean that they can’t happen as in “probability equals zero”, but it does mean that the probability that they happen in any given decade is pretty much negligible.
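To put a rough number on “negligible,” here is a back-of-the-envelope sketch; treating eruptions as a memoryless (Poisson) process with the ~800,000-year mean recurrence interval mentioned above is an assumption made only for illustration.

```python
import math

# Probability of at least one Yellowstone-scale eruption in a given decade,
# assuming a memoryless (Poisson) process with the ~800,000-year mean
# recurrence interval cited above.
mean_interval_years = 800_000
rate_per_year = 1 / mean_interval_years

p_decade = 1 - math.exp(-rate_per_year * 10)
print(f"P(eruption within a decade) ≈ {p_decade:.6%}")  # ≈ 0.001250%
```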
Whether a probability is negligible depends on the impact of an event, not only on its probability.
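A minimal expected-loss sketch of that point, with purely hypothetical numbers for probability and impact:

```python
# Expected-loss comparison: a rare, high-impact event can dominate a common,
# low-impact one. All figures are hypothetical, chosen only to illustrate the
# point, not estimates of any real risk.
events = {
    "frequent, low impact": {"p_per_decade": 0.5, "impact": 1},
    "rare, extreme impact": {"p_per_decade": 1e-5, "impact": 1e8},
}

for name, event in events.items():
    expected_loss = event["p_per_decade"] * event["impact"]
    print(f"{name}: expected loss per decade = {expected_loss}")
# The rare event dominates (1000.0 vs 0.5), so low probability alone does not
# make a risk negligible.
```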
Well, for that matter it also depends on what you can do about it, and I have no idea how we would go about preventing Yellowstone from erupting.
We might be able to reduce the harm it did, even if we couldn’t stop it erupting.
I remember a proposal about cooling down Yellowstone by putting a lake on top of it.
If you spend more money, you could ram carbon nanofiber rods deep into the ground. If the rods are thick enough, the lava shouldn’t do much damage to them, and they could very effectively transport heat to the surface. Maybe you even get electricity as a bonus for cooling down Yellowstone, so the project would pay for itself.