All the thousands of words you’ve written avoid confronting the main point, which is whether people should donate to SIAI. To answer this, we need four numbers:
It sounds as though you are assuming that the aim of “people” is to SAVE THE WORLD.
Do you really think that?!? Have you thought that through?!?
A cursory analysis from the perspective of basic biology predicts that most humans can reasonably be expected to be interested in sex, fashion, food, money and status; concerned with THE END OF THE WORLD, not so much. That seems pretty consistent with the actual interests of most people.
So: are you talking about some tiny subset of all humans? If so, which tiny subset, and what are their presumed goals—since that matters.
People can’t have sex, eat food, follow fashion, get money, or raise their status if the world ends. Unless you absolutely refuse to say that a person wants anything at all beyond what they know they want and say they want and frequently think about wanting, it’s a trivial inference that most people do not want the world to end, and, given the other things they want, should want to help prevent the world from ending if they can.
Human wants were shaped by evolution. The world has not ended yet, so THE END OF THE WORLD is probably a rather abstract concept for many humans. Judging by movies like 2012 and Armageddon, people are obviously somewhat interested in it. Indeed, the concept of THE END OF THE WORLD probably acts as a relatively novel superstimulus to the paranoia circuitry of vulnerable humans, so some people may care about it a lot.
However, if you “follow the money” you will quickly see that lipstick is widely considered to be much more important.
I’m confused by this response. Did I say something to imply that humans can only have one aim at a time? I do think that almost all humans would agree that the world being saved is better than the world not being saved, but of course that competes for money and attention with all other goals, both altruistic and selfish. I happen to think that people ought to weight saving the world highly, but I didn’t say that in the post you’re replying to; I don’t think that people actually do weight saving the world highly, and I didn’t claim that they do. All I said was that it’s important to compute order-of-magnitude figures before drawing conclusions about existential risk.
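For concreteness, here is a minimal sketch of the kind of order-of-magnitude computation I mean. All three inputs are hypothetical placeholders, not estimates anyone in this thread has endorsed; the point is only how the arithmetic combines them:

```python
# Back-of-envelope sketch of an order-of-magnitude expected-value estimate.
# Every input is a hypothetical placeholder chosen to illustrate the
# arithmetic, not an estimate endorsed by anyone in this thread.

p_catastrophe = 1e-2               # assumed probability the catastrophe occurs at all
risk_reduction_per_dollar = 1e-12  # assumed fraction of that risk removed per dollar donated
lives_at_stake = 7e9               # rough world population

expected_lives_saved_per_dollar = (
    p_catastrophe * risk_reduction_per_dollar * lives_at_stake
)

print(f"expected lives saved per dollar: {expected_lives_saved_per_dollar:.1e}")
print(f"implied cost per life saved: ${1 / expected_lives_saved_per_dollar:,.0f}")
```

With these placeholders the implied cost per expected life saved comes out around $14,000. Nudging any single input by a couple of orders of magnitude moves that figure from obviously worthwhile to obviously hopeless, which is exactly why the inputs have to be argued for explicitly before drawing conclusions.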
If people don’t value preventing THE END OF THE WORLD highly, then they have no reason to donate to organisations which are purportedly trying to prevent DOOM.
Since some people seem to think that preventing THE END OF THE WORLD is very important, while a great many others barely seem to think twice about the issue, any attempt to obtain public agreement on these utilities seems itself to be doomed.
I remember the majority of people in the US being afraid of nuclear war with the USSR. This was a rational fear, although I suspect most people actually held it out of susceptibility to propaganda and mass hysteria.
This suggests to me that there’s a difficulty getting people to care about a particular risk until some critical mass is reached, after which the fear may even become excessive.