I think people should discount risk estimates fairly heavily when an organisation is based around doom-mongering. For instance, The Singularity Institute, The Future of Humanity Institute and the Bulletin of the Atomic Scientists all seem pretty heavily oriented around doom. Such organisations initially attract those with high risk estimates, and they then actively try to “sell” their estimates to others.
Obtaining less biased estimates seems rather challenging. The end of the world would obviously be an unprecedented event.
The usual way of eliciting probability is with bets. However, with an apocalypse, this doesn’t work too well. Attempts to use bets have some serious problems.

That’s why I refuse to join SIAI or FHI. If I did, I’d have to discount my own risk estimates, and I value my opinions too much for that. :)
One should read materials from the people in the organization dating from before it was formed, and grant those extra credence in proportion to how much one suspects the organization has written its bottom line first.
Note, however, that this systematically fails to account for the selection bias whereby doom-mongering organisations arise from groups of individuals with high risk estimates.
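To make that selection effect concrete, here is a toy sketch in Python (my own, with invented numbers, not anything measured from real organisations): individual risk estimates are spread out, only the most worried people found or join a doom-focused organisation, and the group’s average estimate ends up far above the population’s even though nobody changed their mind.

    import random

    # Toy model of the selection effect described above.
    # All numbers are invented purely for illustration.
    random.seed(0)

    # A population whose individual risk estimates average around 0.1.
    population = [random.betavariate(1, 9) for _ in range(100_000)]

    # Suppose only people whose estimate exceeds 0.3 bother to found or
    # join a doom-focused organisation.
    joiners = [p for p in population if p > 0.3]

    def mean(xs):
        return sum(xs) / len(xs)

    print(f"population mean estimate:   {mean(population):.2f}")
    print(f"organisation mean estimate: {mean(joiners):.2f}")

The point is only that the organisation’s headline figure can sit well above the wider distribution purely through who shows up, before any “selling” of estimates even starts.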
In the case of Yudkowsky, he started out all “yay, Singularity” and was actively working on accelerating it:

http://www.wired.com/science/discoveries/news/2001/04/43080?currentPage=all
Since then, Yudkowsky has become not just someone who predicts the Singularity, but a committed activist trying to speed its arrival. “My first allegiance is to the Singularity, not humanity,” he writes in one essay. “I don’t know what the Singularity will do with us. I don’t know whether Singularities upgrade mortal races, or disassemble us for spare atoms.… If it comes down to Us or Them, I’m with Them.”
This was written before he hit on the current doom-mongering scheme. According to your proposal, it appears that we should be assigning such writings extra credence—since they reflect the state of play before the financial motives crept in.
Yes, those writings were also free from financial motivation, and less subject to the author’s feeling the need to justify them, than currently produced ones. However, notice that other thoughts, also from before there was any financial motivation, militate against them rather strongly.
An analogy: suppose someone wants a pet and begins by thinking that they would be happier with a cat than a dog, and writes why. Then they think about it more and decide that no, they’d be happier with a dog, and write why. Then they get a dog and write about why that was the best decision at the time with the evidence available, and in fact getting a dog really was the best choice. The first two sets of writings are much more free from this bias than the last set. The last set is valuable because it was written with the most information available and after the most thought, and in that respect the second set is more valuable than the first. But the first set is in no similar way more valuable than the second.
As an aside, that article is awful. Most glaringly, he said: