This is an interesting idea that seems worth looking into. Do you have sources, links, etc.? It could certainly be helpful to draw attention to risk mitigation that is done for short-term reasons; that might be easier to get people to work on.
I don’t have sources to hand, but here’s a post I wrote about the negative side: http://lesswrong.com/lw/10n/why_safety_is_not_safe/
On the positive side, consider video games: an activity certainly carried out for short-term reasons, yet one of the major sources of funding for the development of higher-performance computers, which are an important ingredient in just about every kind of technological progress today.
Or consider how much research in medicine (another key long-term technology) is paid for by individual patients in the present day with the very immediate concern that they don’t want to suffer and die right now.
I don’t think a lack of hardware progress is a major obstacle to avoiding existential disaster.
I read your post, but I don’t see why a lack of understanding of certain past events should lead us to devalue our current best estimates of ways to reduce danger. I wouldn’t be remotely surprised if there are dangers we don’t (yet?) understand, but why presume an unknown danger isn’t localized in the same areas as the known dangers? Keep in mind that reversed stupidity is not intelligence.
Because it has empirically turned out not to be. Reversed stupidity is not intelligence, but it is avoidance of stupidity. When we know a particular source gives wrong answers, that doesn’t tell us the right answers, but it does tell us what to avoid.