Instead of worrying about existential risks, I could join a good startup team/start my own startup and aim to make lots of money.
However, what would happen if the world got through all of these challenges to a positive post-singularity world? People would ask questions such as: "Why did you not put effort into existential risk mitigation when you knew the dangers?" I would ask that question.
So the answer to the question “is it rational” depends upon your goals.
That’s not quite what I meant, but I do believe Eliezer has advised anyone not working on ‘risks’ (or FAI?) to make as much money as they can and contribute to an organization that is.
What I meant was that given a money pump, the straightforward thing to do is to pump it, not fix it in the hope that it will somehow benefit humanity.
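For readers unfamiliar with the term: a money pump is an agent whose cyclic preferences (A over B, B over C, C over A) let you charge a small fee for each "upgrade" and walk them around the cycle indefinitely. The sketch below is my own illustration of that textbook construction, not something from this thread; the preference table, fee, and function names are all hypothetical.

```python
# Illustrative sketch of the classic "money pump" from decision theory:
# an agent with cyclic preferences pays a fee per trade and ends up back
# where it started, strictly poorer. All names here are hypothetical.

# Cyclic preference: each key is (strictly) preferred to its value,
# so the agent will always pay to swap its current item for the key.
PREFERS_OVER = {"A": "C", "B": "A", "C": "B"}

def pump(agent_item: str, agent_money: float, fee: float, rounds: int):
    """Repeatedly offer the item the agent prefers, charging `fee` per trade."""
    for _ in range(rounds):
        # Find the item the agent prefers to its current holding.
        preferred = next(k for k, v in PREFERS_OVER.items() if v == agent_item)
        agent_item, agent_money = preferred, agent_money - fee
    return agent_item, agent_money

item, money = pump("A", agent_money=10.0, fee=1.0, rounds=9)
# After 9 trades at $1 each, the agent holds "A" again but is $9 poorer.
```

Each individual trade looks locally rational to the agent (it always receives something it prefers), which is exactly why cyclic preferences are considered irrational: the cycle as a whole is a pure loss.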
It seems to me that on LW, most believe that people are irrational in ways that should make them exploitable as money pumps, yet the reaction to this is to make extreme efforts to persuade people of things.
Improving someone’s intelligence or rationality is difficult if they’re not already looking, but channeling away some of their funds or political capital will lessen the impact their irrationality can have.
most believe that people are irrational in ways that should make people money pumps
I think that most people are (unconsciously) rational enough to avoid being financially exploited in any but the obvious known ways (lotteries, advertising affecting preferences, etc.), for which there's already a competitive market or regulation. Aside from those avenues, most people are too wary of being cheated to make good money pumps.
What’s not being covered as much are the ways that people irrationally contribute to the destruction of wealth and utility, by voting for ineffective or detrimental policies or by spreading harmful memes. (People don’t typically have an evolved horror of doing these things.) A push towards rationality on those fronts can help everyone.
If you disagree, do you have a particularly effective money-pump in mind? (Of course you might hesitate to share it.)
That’s not quite what I meant, but I do believe Eliezer has advised anyone not working on ‘risks’ (or FAI?) to make as much money as they can and contribute to an organization that is.
So he’s chosen the “work the money pumps” option, then, rather than trying to correct them.
This is a good point.
If you believe that he believes that this is not the best use of that person's anti-risk time, then yes.
But I think he’s sincerely looking for the best way to mitigate those risks.