Why do people spend much, much more time worrying about their retirement plans than about the intelligence explosion, if the two are a similar distance in the future? I understand that people spend less time worrying about the intelligence explosion than would be socially optimal, because the vast majority of its benefits will be in the very far future, which people care little about. However, it seems probable that the intelligence explosion will still have a substantial effect on many people in the near-ish future (within the next 100 years). Yet hardly anyone worries about it. Why?
First: Most people haven’t encountered the idea (note: watching Terminator does not constitute encountering the idea). Most of those who have encountered it have only a very hazy idea of it and haven’t given it serious thought.
Second: Suppose you decide that both pension savings and the intelligence explosion have a real chance of making a difference to your future life. Which can you do more about? Well, you can adjust your future wealth considerably by changing how much you spend and how much you save, and the tradeoff between present and future is reasonably clear (sketched below). What can you do to make it more likely that a future intelligence explosion will improve your life and less likely that it’ll make it worse? Personally, I can’t think of anything I can do that seems likely to have a non-negligible impact, nor can I think of anything I can do for which I am confident about the sign of whatever impact it would have.
(Go and work for Google and hope to get on a team working on AI? Probably unachievable, not clear I could actually help, and who knows whether anything they produce will be friendly? Donate to MIRI? There’s awfully little evidence that anything they’re doing is actually going to be of any use, and if at some point they decide they should actually start building AI systems to experiment with their ideas, who knows? They might be dangerous. Lobby for government-imposed AI safety regulations? Unlikely to succeed, and if it did it might turn out to impede carefully done AI research more than it impedes actually dangerous AI research, not least because one can do AI research in more than one of the world’s countries. Try to build a friendly AI myself? Ha ha ha. Assassinate AI researchers? Aside from being illegal and immoral and dangerous, that’s probably just as likely to stop someone having a crucial insight needed for friendly AI as to stop someone making something that will kill us all. Try to persuade other people to worry about unfriendly AI? OK, but they don’t have any more useful things to do about it than I do. Etc.)
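To make the contrast concrete, here is a minimal sketch of the savings side of that tradeoff. The 5% real return and the contribution levels are illustrative assumptions, not figures from the thread:

```python
# Illustrative only (all figures are assumed, not from the thread):
# a 5% annual real return over a 40-year working life.

def future_value(annual_saving: float, rate: float = 0.05, years: int = 40) -> float:
    """Future value of a constant annual contribution (ordinary annuity)."""
    return annual_saving * ((1 + rate) ** years - 1) / rate

for saving in (2_000, 5_000, 10_000):
    print(f"save {saving:>6,}/yr -> {future_value(saving):>11,.0f} at retirement")
```

Double the saving rate and the end result doubles; nothing in the list of AI options above has anything like that legibility.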
Incidentally, do many people actually spend much time worrying about their retirement plans? (Note: this is not the same question as “do people worry about their retirement plans?” or “are people worried about their retirement plans?”.)
People could vote for government officials who have FAI (friendly AI) research on their agenda, but currently I think few, if any, politicians even know what FAI is. Why is that?
Because most people don’t agree that ‘it seems probable that the intelligence explosion will still have a substantial effect on many people in the near-ish future’.
Why do people spend much, much more time worrying about their retirement plans than the intelligence explosion if they are a similar distance in the future?
Why do you think they are a similar distance in the future? If you take the LW median estimate for the arrival of the intelligence explosion, it’s later than when most people are going to retire.
If you look at the general population, most people consider the intelligence explosion even less likely.
It’s later, but, unless I am mistaken, the arrival of the intelligence explosion isn’t that much later than when most people will retire, so I don’t think that fully explains it.
I think it’s often double: retiring in 40 years and expecting the intelligence explosion in 80 years.
That sounds about right.