I suspect this is intentional, but the set {1,6,7,8} of predictions is redundant, in the sense that the probabilities for three of them mathematically imply the probability of the fourth via the law of total probability.
In particular, if #1 is A and #6 is B, then #7 and #8 are A|B and A|¬B, and we have the equality
P(A)=P(A|B)P(B)+P(A|¬B)P(¬B)
The probability I would assign to #8 intuitively is about 0.41. Math based on my other three predictions yields (doing the calculation now) 0.476. I am going to predict the math output rather than my intuition.
Did anyone else calculate their level of inconsistency?
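For anyone who wants to run the same check, here is a minimal sketch (the inputs for #1, #6, and #7 are made up for illustration, not my actual predictions) that solves the identity above for P(A|¬B) and measures the gap to an intuitive answer for #8:

```python
# A minimal sketch (illustrative numbers, not the commenter's actual predictions)
# of the consistency check described above: given P(A), P(B), and P(A|B),
# the law of total probability pins down P(A|not B).

def implied_p_a_given_not_b(p_a: float, p_b: float, p_a_given_b: float) -> float:
    """Solve P(A) = P(A|B)P(B) + P(A|not B)P(not B) for P(A|not B)."""
    return (p_a - p_a_given_b * p_b) / (1 - p_b)

# Hypothetical inputs for predictions #1 (A), #6 (B), and #7 (A|B).
p_a, p_b, p_a_given_b = 0.50, 0.30, 0.55

implied = implied_p_a_given_not_b(p_a, p_b, p_a_given_b)
intuitive = 0.41  # the intuitive answer for #8 mentioned above

print(f"Implied P(A|¬B): {implied:.3f}")
print(f"Inconsistency vs. intuition: {abs(implied - intuitive):.3f}")
```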
The probability I would assign to #8 intuitively is about 0.41. Math based on my other three predictions yields (doing the calculation now) 0.476. I am going to predict the math output rather than my intuition.
I think the correct response to this realization is not to revise your final answer so as to make it consistent with the first three. It is to revise all four answers so that they are maximally intuitive, subject to the constraint that they be jointly consistent. Which answer comes last is just an artifact of the order of presentation, so it isn’t a rational basis for privileging some answers over others.
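One way to operationalize that (a sketch with made-up intuitive numbers, not a procedure the questions themselves prescribe) is to treat it as a small constrained optimization: stay as close as possible to all four intuitive answers while enforcing the total-probability identity.

```python
# A minimal sketch (illustrative numbers, my own framing) of revising all four
# answers at once: minimize the distance to the intuitive probabilities subject
# to P(A) = P(A|B)P(B) + P(A|¬B)(1 - P(B)).
import numpy as np
from scipy.optimize import minimize

# Hypothetical intuitive answers for #1 P(A), #6 P(B), #7 P(A|B), #8 P(A|¬B).
intuitive = np.array([0.50, 0.30, 0.55, 0.41])

def distance(x):
    # Squared distance from the intuitive answers.
    return np.sum((x - intuitive) ** 2)

def consistency(x):
    # Law of total probability: P(A) - [P(A|B)P(B) + P(A|¬B)(1 - P(B))] = 0.
    p_a, p_b, p_a_given_b, p_a_given_not_b = x
    return p_a - (p_a_given_b * p_b + p_a_given_not_b * (1 - p_b))

result = minimize(
    distance,
    x0=intuitive,
    bounds=[(0.0, 1.0)] * 4,
    constraints=[{"type": "eq", "fun": consistency}],
)
print("Jointly consistent answers:", np.round(result.x, 3))
```

The squared-distance objective is just one choice; the point is that the adjustment gets spread across all four answers rather than dumped entirely on whichever one happened to come last.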
This is only true if, for example, you think AI would cause GDP growth. My model assigns a lot of probability to ‘AI kills everyone before (human-relevant) GDP goes up that fast’, so questions #7 and #8 are conditional on me being wrong about that. If we can last even a small multiple of a year with AI smart enough to double GDP in that timeframe, then things probably aren’t as bad as I thought.