Here is my own answer.
It takes as a starting point datscilly's own prediction, i.e., the result of applying Laplace's rule of succession starting from the 1956 Dartmouth conference. This seems like the most straightforward historical base rate / model to use, and on a meta-level I trust datscilly and I've worked with him before.
I then subtract some probability from the beginning and move it towards the end, because I think it's unlikely we'll get human parity in the next 5 years. In particular, even Daniel Kokotajlo, the most bullish among the other predictors, puts his peak probability somewhere around 2025.
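As a rough illustration of what that base rate looks like, here is a minimal sketch of Laplace's rule applied to the Dartmouth start date. The start year (1956) is from the text; the evaluation year (2021) and the target years are my own illustrative assumptions.

```python
# Laplace's rule of succession applied to AGI timelines: after n years
# without AGI, the probability of AGI within the next k years is
# 1 - (n + 1) / (n + k + 1) under a uniform prior on the yearly rate.

def p_agi_within(n_failures: int, k_years: int) -> float:
    """Probability of at least one 'success' in the next k trials,
    given n consecutive failures, under Laplace's rule of succession."""
    return 1 - (n_failures + 1) / (n_failures + k_years + 1)

start, now = 1956, 2021          # assumption: evaluating in 2021
n = now - start                  # years elapsed without AGI
for target in (2030, 2050, 2100):
    # roughly 0.12 by 2030, 0.31 by 2050, 0.54 by 2100
    print(target, round(p_agi_within(n, target - now), 3))
```

These numbers are just the raw base rate, before the adjustments described next.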
I then apply some smoothing.
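A minimal sketch of those two adjustments, applied to a discretized per-year distribution. The yearly grid, the fraction of mass moved, and the smoothing window are illustrative assumptions of mine, not the exact values I used.

```python
import numpy as np

years = np.arange(2021, 2121)                  # assumption: 2021-2120 grid
n0 = years[0] - 1956                           # failure-years since Dartmouth
k = years - years[0] + 1                       # years ahead of the evaluation point
cdf = 1 - (n0 + 1) / (n0 + k + 1)              # Laplace-rule CDF
pmf = np.diff(cdf, prepend=0.0)                # per-year probabilities
pmf /= pmf.sum()                               # renormalize within the window (simplification)

# 1. Move most of the mass out of the first 5 years and into the far tail.
moved = 0.8 * pmf[:5].sum()                    # assumption: move 80% of the early mass
pmf[:5] *= 0.2
pmf[-20:] += moved / 20                        # spread it over the last 20 years

# 2. Smooth with a simple moving average (assumption: 5-year window).
kernel = np.ones(5) / 5
pmf = np.convolve(pmf, kernel, mode="same")
pmf /= pmf.sum()
```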
My resulting distribution looks similar to the current aggregate (something I only noticed after building it).
Datscilly’s prediction:
My prediction:
The previous aggregate:
Some things I don't like about the other predictions:
Not long enough tails. There have been AI winters before; there could be AI winters again. Shit happens.
Very spiky maximums. I get that specific models can provide sharp predictions, but the question seems hard enough that I’d expect there to be a large amount of model error. I’d also expect predictions which take into account multiple models to do better.
Not updating on other predictions. Some of the other forecasters seem to have one big idea, rather than multiple uncertainties.
Things that would change my mind:
At the five-minute level:
Getting more information about Daniel Kokotajlo’s models. On a meta-level, learning that he is a superforecaster.
Some specific definitions of “human level”.
At the longer-discussion level:
Object-level arguments about AI architectures.
Some information about whether experts believe that current AI methods can lead to AGI.
Some object-level arguments about Moore's law, e.g., by which year does Moore's law predict we'll have substantially more computing power than the higher estimates for the human brain? (A rough back-of-the-envelope version is sketched after this list.)
I’m also uncertain about what probability to assign to AGI after 2100.
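For concreteness, here is the kind of back-of-the-envelope Moore's-law calculation I have in mind. The starting compute figure, the doubling time, and the brain-compute estimates are all rough illustrative assumptions, not claims made in this thread.

```python
import math

def crossover_year(start_year: int, start_flops: float,
                   target_flops: float, doubling_years: float = 2.0) -> float:
    """Year at which start_flops, doubling every `doubling_years`,
    first reaches target_flops."""
    doublings = math.log2(target_flops / start_flops)
    return start_year + doublings * doubling_years

# Assumptions: ~1e18 FLOP/s available to a large project in 2020, and
# brain-compute estimates spanning several orders of magnitude (1e15 to
# 1e21 FLOP/s, purely illustrative).
for brain in (1e15, 1e18, 1e21):
    print(f"brain estimate {brain:.0e} FLOP/s -> ~{crossover_year(2020, 1e18, brain):.0f}")
```

Under these assumptions, the answer swings by decades depending on which brain estimate you take, which is part of why I'd want to see the argument spelled out.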
I might revisit this as time goes on.
I'm not a superforecaster. I'd be happy to talk more about my models if you like. You may be interested to know that my prediction was based on aggregating various different models, and that I did try to account for things usually taking longer than expected. I'm trying to arrange a conversation with Ben Pace; perhaps you could join. I could also send you a powerpoint I made, or we could video chat.
I can answer your question #3. There’s been some good work on the question recently by people at OpenPhil and AI Impacts.