Collecting all of the quantitative AI predictions I know of MIRI leadership making on Arbital (let me know if I missed any):
Aligning an AGI adds significant development time: Eliezer 95%
Almost all real-world domains are rich: Eliezer 80%
Complexity of value: Eliezer 97%, Nate 97%
Distant superintelligences can coerce the most probable environment of your AI: Eliezer 66%
Meta-rules for (narrow) value learning are still unsolved: Eliezer 95%
Natural language understanding of “right” will yield normativity: Eliezer 10%
Relevant powerful agents will be highly optimized: Eliezer 75%
Some computations are people: Eliezer 99%, Nate 99%
Sufficiently optimized agents appear coherent: Eliezer 85%
Some caveats:
Arbital predictions range from 1% to 99%.
I assume these are generally ~5 years old. Views may have shifted.
By default, I assume that the standard caveats for probabilities like these apply: I treat these as off-the-cuff ass numbers unless stated otherwise, products of ‘thinking about the problem on and off for years and then querying my gut about what it expects to actually see’, more so than of building Guesstimate models or trying too hard to make sure all the probabilities are perfectly coherent.
Inconsistencies are flags that ‘something is wrong here’, but ass numbers are vague and unreliable enough that some inconsistency is to be expected. Similarly, ass numbers are often unstable hour-to-hour and day-to-day.
On my model, the point of ass numbers isn’t to demand perfection of your gut (e.g., of the sort that would be needed to avoid multiple-stage fallacies when trying to conditionalize a lot), but to:
Communicate with more precision than English-language words like ‘likely’ or ‘unlikely’ allow. Even very vague or uncertain numbers will, at least some of the time, be a better guide than natural-language terms that weren’t designed to cover the space of probabilities (and that can vary somewhat in meaning from person to person).
At least very vaguely and roughly bring your intuitions into contact with reality, and with each other, so you can more readily notice things like ‘I’m miscalibrated’, ‘reality went differently than I expected’, ‘these two probabilities don’t make sense together’, etc.
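To make that last point a bit more concrete, here's a minimal sketch of the kind of mechanical check that writing numbers down enables. This is my own illustration, not anything from the Arbital predictions; all function names and numbers are made up.

```python
# Two crude sanity checks you might run on a handful of ass numbers:
# a conjunction-coherence check and a calibration bucket summary.

def check_conjunction(p_a: float, p_b: float, p_a_and_b: float) -> bool:
    """P(A and B) can never exceed P(A) or P(B); flag pairs that violate this."""
    return p_a_and_b <= min(p_a, p_b)

def calibration_table(predictions):
    """Bucket (stated probability, outcome) pairs and compare stated vs. observed frequency."""
    buckets = {}
    for p, happened in predictions:
        key = round(p, 1)  # crude 10%-wide buckets
        buckets.setdefault(key, []).append(happened)
    return {k: (len(v), sum(v) / len(v)) for k, v in sorted(buckets.items())}

# A 97% claim and a 60% claim whose conjunction was given 98%: incoherent.
print(check_conjunction(0.97, 0.60, 0.98))  # False -> "something is wrong here"

# Past forecasts as (stated probability, did it happen?) pairs.
past = [(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.6, False)]
print(calibration_table(past))  # {0.6: (2, 0.5), 0.9: (3, 0.667)}
```

Nothing fancy is going on here; the point is just that once the numbers are written down, checks like these are cheap to run, in a way they aren't for words like ‘likely’.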
It may still be a terrible idea to spend too much time generating ass numbers, since “real numbers” are not the native format human brains compute probability with, and spending a lot of time working in a non-native format may skew your reasoning.
(Maybe there’s some individual variation here?)
But they’re at least a good tool to use sometimes, for the sake of crisper communication, calibration practice (so you can generate non-awful future probabilities when you need to), etc.