My point, with this, is that everybody is risk-averse and everybody has a time preference. The less that is known about the prospects of a future technology, the less willing people are to invest resources in ventures that depend on its future development. (Whether to take advantage of the technology—as in cryonics—or to mitigate its dangers—as in FAI.) Also, the farther in the future the technology is, the less people care about it; we’re not willing to spend much to achieve benefits or forestall risks in the far future.
I don’t think it’s reasonable to expect people to change these ordinary features of economic preference. If you’re going to ask people to chip in to your cause, and the time horizon is too far, or the uncertainty too high, they’re not going to want to spend their resources that way. And they’ll be justified.
Note: yes, there ought to be some magnitude of benefit or cost that overcomes both risk aversion and time preference. Maybe you’re going to argue that existential risk and cryonics are issues of such great magnitude that they outweigh both risk aversion and time preference.
But: first of all, the importance of the benefit or cost is also an unknown (and indeed subjective). How much do you value being alive? And, second of all, nobody says our risk and time preferences are well-behaved. There may be a date so far in the future that I don’t care about anything that happens then, no matter how good or how bad. There may be loss aversion—an amount of money that I’m not willing to risk losing, no matter how good the upside. I’ve seen some experimental evidence that this is common.
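To make the interplay concrete, here is a small sketch of how standard exponential time discounting and outcome uncertainty combine to shrink the present value of a far-future payoff. All of the numbers (the benefit size, success probability, horizon, and discount rate) are hypothetical illustrations, not figures from the discussion above.

```python
# Sketch: exponential time discounting plus outcome uncertainty.
# Illustrative numbers only; none of these values come from the comment.

def present_value(benefit, p_success, years, discount_rate):
    """Expected present value of an uncertain benefit realized `years` from now,
    under a constant annual discount rate."""
    return benefit * p_success * (1 - discount_rate) ** years

# A benefit valued at 1,000,000 units, with a 10% chance the technology
# pans out, realized 100 years out, at a modest 3% annual discount rate:
pv = present_value(1_000_000, p_success=0.10, years=100, discount_rate=0.03)
print(round(pv, 2))  # under 1% of the nominal benefit
```

Even before any appeal to exotic preferences, ordinary parameters like these push the expected value down by orders of magnitude, which is why asking people to spend now on distant, uncertain payoffs is such a hard sell.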
My point, with this, is that everybody is risk-averse and everybody has a time preference.
From what I understand, this applies to most people but not to everyone, especially outside of contrived laboratory circumstances. Overconfidence and ambition essentially amount to risk-loving behavior in some major life decisions.