True. Also, the Great Filter is more akin to an existential catastrophe than to existential risk, that is, the risk of an existential catastrophe.
Is there a term for a generalization of existential risk that includes the extinction of alien intelligences or the drastic decrease of their potential? Existential risk, that is, the extinction of Earth-originating intelligent life or the drastic decrease of its potential, does not sound nearly as harmful if there are alien civilizations that could become sufficiently advanced in place of Earth-originating life. However, an existential risk sounds far more harmful if it compromises all intelligent life in the universe, or if there is no other intelligent life in the universe to begin with. Perhaps this would make physics experiments more concerning than other existential risks: even if their chance of causing the extinction of Earth-originating life is much smaller than that of other existential risks, their chance of eliminating all life in the universe may be higher.
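To make this concrete, here is a minimal sketch with made-up numbers (the probabilities and relative values below are assumptions for illustration, not estimates): a risk with a much smaller probability can still dominate in expected loss if its scope is all intelligent life in the universe rather than Earth-originating life alone.

```python
# Hypothetical numbers only: illustrating that scope can outweigh probability.

V_EARTH = 1.0          # normalized value of Earth-originating intelligent life
V_UNIVERSE = 1000.0    # assumed value at stake if all intelligent life in the universe is lost

p_typical_xrisk = 1e-2       # assumed probability of a "typical" existential catastrophe
p_physics_disaster = 1e-4    # assumed, much smaller probability of a physics-experiment disaster

expected_loss_typical = p_typical_xrisk * V_EARTH          # 0.01
expected_loss_physics = p_physics_disaster * V_UNIVERSE    # 0.10

print(expected_loss_typical, expected_loss_physics)
# Here the physics disaster has 100x lower probability but 10x higher expected loss.
```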
That sounds about right.
It’s later, but, unless I am mistaken, the arrival of the intelligence explosion isn’t that much later than when most people will retire, so I don’t think that fully explains it.
People could vote for government officials who have FAI research on their agenda, but currently, I think few if any politicians even know what FAI is. Why is that?
Why do people spend much, much more time worrying about their retirement plans than the intelligence explosion if they are a similar distance in the future? I understand that people spend less time worrying about the intelligence explosion than what would be socially optimal because the vast majority of its benefits will be in the very far future, which people care little about. However, it seems probable that the intelligence explosion will still have a substantial effect on many people in the near-ish future (within the next 100 years). Yet, hardly anyone worries about it. Why?
I would like to improve my instrumental rationality and improve my epistemic rationality as a means to do so. Currently, my main goal is to obtain useful knowledge (mainly in college) in order to obtain resources (mainly money). I’m not entirely sure what I want to do after that, but whatever it is, resources will probably be useful for it.
Improving my rationality. Are you looking for something more specific?
How much should you use LW, and how? Should you consistently read the articles on Main? What about discussion? What about the comments? Or should a more case-by-case system be used?
What exactly do you suggest using to invest, then?
Some parties may be more likely to accelerate scientific progress than others, and those who do could decrease existential risk by decreasing the time spent in high-risk states, for example the period when there are dangerous nanotechnological weapons but other astronomical objects have not been colonized. This probably is not enough to justify voting, but I thought I would let you know.
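A minimal sketch of that reasoning, with assumed numbers (the annual risk, period length, and speed-up below are hypothetical): if catastrophe strikes independently with some annual probability only while in the high-risk state, then shortening that state raises the probability of getting through it.

```python
r = 0.005   # assumed annual probability of catastrophe during the high-risk period
T = 50      # assumed length of the high-risk period, in years
delta = 5   # assumed years saved by accelerating scientific progress

p_survive_baseline = (1 - r) ** T             # ~0.778
p_survive_shortened = (1 - r) ** (T - delta)  # ~0.798

print(p_survive_shortened - p_survive_baseline)  # ~0.02 absolute gain in survival probability
```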
Presumably “employ the same strategy” should be interpreted loosely, as it seems problematic to give no consideration to agents who would use a slightly different allocation strategy.
Thanks for the idea. I will look into it.
What literature is available on who will be given moral consideration in a superintelligence’s coherent extrapolated volition (CEV) and how much weight each agent will be given?
Nick Bostrom’s Superintelligence mentions that it is an open problem whether AIs, non-human animals, currently deceased people, etc. should be given moral consideration, and whether the values of those who aid in creating the superintelligence should be given more weight than those of others. However, Bostrom does not actually answer these questions, other than slightly advocating that everyone be given equal weight in the CEV. The abstracts of other papers on CEV don’t mention this topic, so I am doubtful about the usefulness of reading them in their entirety.
(This is a repost.)
Is it okay to re-ask questions on open threads if they were not answered the last time they were asked? I had asked this question but received no answer, and I am concerned it would be spammy to re-ask.
My gut says that a vehicle collision will probably bring the head to a halt against a relatively immobile object, so the hat won’t do much of anything, as the crushable bits would be crushed too fast to be effective.
I don’t see how the latter clause follows from the former. You said that in the drop test, the impact reduction was roughly 25%. This isn’t huge, but I can’t say it “won’t do much of anything.” Were you thinking of something else to support your claim?
Don’t worry, I don’t mind math. Alas, I mainly have difficulty understanding why people act how they do, so I doubt mathematics will help much with that. I think I’m going to take the suggestion someone gave of reading more textbooks. A psychology course should also help.
Severity Index (I assume this is based on the head injury criterion?)
Actually, in an email they said the head of NOCSAE did the test, so presumably the NOCSAE Severity Index was used. An NOCSAE article says, “There is no measurable difference in safety of helmets with scores below the 1200 SI threshold.” In other words, the test did not show the hat preventing any significant damage, because no significant damage would have been done even without the hat. Despite this, the webpage said, “The Crasche hat reduces the severity of blunt force impact by 94%.” I count this deceptive marketing as a strike against the product.
That said, given the low cost of purchasing and wearing the hat, it seems worthwhile for a transhumanist to purchase one, simply due to the vast gains to be had from even a slight reduction in the risk of death.
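For context on the numbers above: my understanding is that the NOCSAE Severity Index is the Gadd-style integral SI = ∫ a(t)^2.5 dt, with acceleration in g’s and time in seconds, and that 1200 is the pass/fail threshold. Here is a minimal sketch with a made-up impact pulse (the half-sine shape and 100 g peak are assumptions, not Crasche or NOCSAE test data), just to show how an acceleration trace turns into an SI number.

```python
import math

def severity_index(accel_g, dt):
    """Trapezoidal approximation of SI = integral of a(t)**2.5 dt,
    with a(t) in g's and dt in seconds (Gadd-style Severity Index)."""
    powered = [abs(a) ** 2.5 for a in accel_g]
    return sum((powered[i] + powered[i + 1]) / 2 * dt for i in range(len(powered) - 1))

# Made-up 10 ms half-sine impact pulse peaking at 100 g, sampled every 0.1 ms.
dt = 1e-4
n = 100
pulse = [100 * math.sin(math.pi * i / n) for i in range(n + 1)]

print(round(severity_index(pulse, dt)))  # roughly 458, below the 1200 SI threshold
```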
I don’t see how this would really help unless I am trying to do original research.
Panexistential risk is a good, intuitive name.