I’m not asking for defensible probabilities that would withstand academic peer review. I’m asking for decision procedures: formulas with variables into which you can plug your own intuitive values to calculate your own probabilities. I want the SIAI to provide a framework that gives a concise summary of the risks in question and a comparison with other existential risks. I want people to be able to analyse the results and distinguish the risks posed by artificial general intelligence from other risks like global warming or grey goo.
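To make concrete what I mean, here is a minimal sketch of such a plug-in-your-own-values formula. Every factor name and number in it is hypothetical; it only illustrates the kind of structure being asked for, not anything SIAI has published:

```python
# Hypothetical sketch of a "supply your own intuitive values" decomposition.
# The factor names and example numbers are placeholders, not claims.

def p_ai_xrisk(p_agi_this_century, p_unfriendly_given_agi, p_doom_given_unfriendly):
    """Chain a reader's own conditional estimates into a single probability."""
    return p_agi_this_century * p_unfriendly_given_agi * p_doom_given_unfriendly

# Plug in your own intuitions, then compare against another existential risk
# estimated the same way (all numbers below are illustrative placeholders).
my_ai_risk = p_ai_xrisk(0.3, 0.5, 0.5)   # -> 0.075
my_grey_goo_risk = 0.01                  # whatever your own estimate happens to be
print(my_ai_risk, my_grey_goo_risk, my_ai_risk / my_grey_goo_risk)
```

The point is not these particular factors but that the reader, not the institution, supplies the numbers and can then compare the outputs across risks.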
There aren’t any numbers for a lot of other existential risks either. But one can still differentiate between those risks and the risk from unfriendly AI based on the logical consequences of other established premises, such as the Church–Turing–Deutsch principle. Should we be equally concerned about occultists trying to summon world-changing supernatural powers?
+1
Unfortunately, this is a common conversational pattern.
Q. You have given your estimate of the probability of FAI/cryonics/nanobots/FTL/antigravity. In support of this number, you have here listed probabilities for supporting components, with no working shown. These appear to include numbers not only for technologies we have no empirical knowledge of, but also for particular new scientific insights that have yet to occur. It looks very like you have pulled the numbers out of thin air. How did you derive these numbers?
A. Bayesian probability calculations.
Q. Could you please show me your working? At least a reasonable chunk of the Bayesian network you derived this from? C’mon, give me something to work with here.
A. (tumbleweeds)
Q. I remain somehow unconvinced.
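Even a toy version of the “working” in question is trivial to write down, which makes its absence conspicuous. Here is a minimal sketch of what a single step might look like; the prior and likelihoods below are invented placeholders, not anyone’s actual estimates:

```python
# A toy Bayes update, standing in for the "working" that never gets shown.
# All inputs are invented placeholders.

prior_h = 0.1          # P(H): prior that the technology pans out
p_e_given_h = 0.8      # P(E|H): chance of the cited evidence if H is true
p_e_given_not_h = 0.3  # P(E|not H): chance of seeing it anyway if H is false

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e    # Bayes' theorem
print(round(posterior_h, 3))                 # ~0.229

# Swap the invented prior for a different invented prior and the "result"
# moves with it -- the calculation is only as good as its inputs.
```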
If you pull a number out of thin air and run it through a formula, the result is still a number pulled out of thin air.
If you want people to believe something, you have to bother convincing them.