1. define “global catastrophe halts progress”
2. probability of what exactly conditional on what exactly?
3. probability of what exactly conditional on what exactly?
4. define “require”
5. define “outweigh”
ETA: Since multiple people seem to find this comment objectionable for some reason I don’t understand, let me clarify a little. For 1 it would make some difference to my estimate whether we’re conditioning on literal halting of progress or just significant slowing, and things like how global the event needs to be. (This is a relatively minor ambiguity, but 90th percentiles can be pretty sensitive to such things.) For 2 it’s not clear to me whether it’s asking for the probability that a negative singularity happens conditional on nothing, or conditional on no disaster, or conditional on badly-done AI, or whether it’s asking for the probability that it’s possible that such a singularity will happen. All these would have strongly different answers. For 3 something similar. For 4 it’s not clear whether to interpret “require” as “it would be nice”, or “it would be the best use of marginal resources”, or “without it there’s essentially no chance of success”, or something else. For 5 “outweigh” could mean outweigh in probability or outweigh in marginal value of risk reduction, or outweigh in expected negative value, or something else.
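To make the ambiguity in 2 concrete, here is a minimal worked example using the law of total probability. The symbols and numbers below are purely illustrative placeholders of my own choosing, not anyone's actual estimates: write N for "a negative Singularity occurs" and B for "the AI is badly done".

```latex
% Law of total probability: the unconditional answer and the conditional
% answer coincide only if conditioning on B makes no difference.
P(N) = P(N \mid B)\,P(B) + P(N \mid \neg B)\,P(\neg B)

% With illustrative placeholder numbers (not actual estimates):
% P(N \mid B) = 0.9,\quad P(B) = 0.3,\quad P(N \mid \neg B) = 0.05
P(N) = 0.9 \cdot 0.3 + 0.05 \cdot 0.7 = 0.305

% Read one way ("conditional on nothing") the answer is about 0.3;
% read another way ("conditional on badly-done AI") it is 0.9.
% The same question, parsed differently, yields answers a factor of ~3 apart.
```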
P(human-level AI by ? (year) | no wars ∧ no natural disasters ∧ beneficial political and economic development) = 10%/50%/90%/0%
P(negative Singularity | badly done AI) = ?; P(extremely negative Singularity | badly done AI) = ? (where ‘negative’ = human extinction; ‘extremely negative’ = humans suffer).
P(superhuman intelligence within hours | human-level AI on supercomputer with Internet connection) = ?; P(superhuman intelligence within days | human-level AI on supercomputer with Internet connection) = ?; P(superhuman intelligence within < 5 years | human-level AI on supercomputer with Internet connection) = ?
How much money does the SIAI currently (this year) require to be instrumental in maximizing your personal long-term goals (e.g. to survive the Singularity by solving friendly AI): less / no more / a little more / much more / vastly more?
What existential risk is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?
Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?
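One note on the reformulated questions above, offered only as standard probability coherence rather than as anything asserted in the thread: because the events involved are nested, the answers constrain each other.

```latex
% "Within hours" is a subset of "within days", which is a subset of
% "within 5 years", so for the fixed conditioning event C (human-level AI on a
% supercomputer with an Internet connection) coherent answers must be monotone:
P(\text{superhuman within hours} \mid C) \le
P(\text{superhuman within days} \mid C) \le
P(\text{superhuman within 5 years} \mid C)

% Similarly, F(t) = P(\text{human-level AI by year } t \mid \text{no wars} \wedge \dots)
% is non-decreasing in t, so the years answering the 10%/50%/90% question
% must satisfy
t_{10\%} \le t_{50\%} \le t_{90\%}
```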