I disagree. If we can avoid being wiped out, or otherwise have our potential permanently limited, our eventual outcome is very likely to be good beyond our potential to imagine. I really think the “maxipok” term of our efforts toward the greater good can’t fail to absolutely dominate all other terms.
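To put rough numbers on that (a minimal sketch; every figure below is an assumption made up for illustration, not an estimate): if the value of a non-curtailed future is astronomically large, then even a minuscule shift in the probability of an okay outcome outweighs everything at stake in ordinary near-term causes.

    # Illustrative expected-value comparison; every number here is a made-up assumption.
    V_FUTURE = 1e30  # assumed value of a non-curtailed long-term future (arbitrary units)
    V_NEAR = 1e9     # assumed value at stake in a typical near-term cause

    def expected_value(p_ok):
        # Expected value given probability p_ok of avoiding existential catastrophe.
        return p_ok * V_FUTURE

    # A one-in-a-billion improvement in the odds of an OK outcome...
    gain = expected_value(0.5 + 1e-9) - expected_value(0.5)
    print(gain)           # ~1e21
    print(gain > V_NEAR)  # True: the "maxipok" term swamps the near-term term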
That sounds very optimistic. I just don’t see any reason to expect the future to be so bright if human genetic, cultural and technological evolution goes on under the usual influence of competition. Unless we do something rather drastic (e.g. FAI or some other kind of positive singleton) in the short term, it seems inevitable that we end up in Malthusian hell.
Most of what I consider ‘good’ is, for the purposes of competition, a complete waste of time.
Lack of interest in existential risk reduction makes perfect sense from an evolutionary perspective. As I have previously explained:
“Organisms can be expected to concentrate on producing offspring—not indulging paranoid fantasies about their whole species being wiped out!”
Most people are far more concerned about other things—for perfectly sensible and comprehensible reasons.
This is a bizarre digression from the parent comment. You’re already having this exact conversation elsewhere in the thread!
It follows from “This seems to assume that existential risk reduction is the only thing people care about” together with “I disagree”: people do care about other things. They mostly care about other things.
Your last sentence seems true.
I think I also buy the evolved-intelligence-should-be-myopic argument, even though we have only one data point; we don’t really need the evolutionary argument to lend support to what direct observation already shows in our case.
So, I can’t see why this is downvoted except that it’s somewhat of a tangent.
Well, I wasn’t really claiming that “evolved-intelligence-should-be-myopic”.
Evolved-intelligence is what we have, and it can predict the future—at least a little:
http://alife.co.uk/essays/evolution_sees/
Even if the “paranoid fantasies” have considerable substance, it would still usually be better (for your genes) to concentrate on producing offspring. Averting disaster is a “tragedy of the commons” situation. Free riding (letting someone else avert the disaster) may well reap the benefits without paying the costs.
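To make the free-rider logic concrete, here is a toy public-goods payoff sketch (the numbers and the payoff structure are illustrative assumptions, not a model of any real scenario):

    # Toy one-shot public-goods game; the payoffs are illustrative assumptions.
    BENEFIT = 100.0  # assumed benefit to each individual if the disaster is averted
    COST = 10.0      # assumed private cost of working on disaster-aversion

    def payoff(contribute, others_contributing):
        # The benefit is shared by everyone if anyone contributes;
        # the cost falls only on those who contribute.
        averted = contribute or others_contributing > 0
        return (BENEFIT if averted else 0.0) - (COST if contribute else 0.0)

    print(payoff(False, others_contributing=1))  # 100.0: the free rider
    print(payoff(True,  others_contributing=1))  #  90.0: the contributor

Whenever someone else is already doing the averting, free riding strictly dominates contributing; the same incentive structure holds for genes, since the fitness cost of disaster-aversion work is private while the benefit of a non-extinct species is shared.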