I haven’t read this post, I just wanted to speculate about the downvoting, in case it helps.
Assigning “zero” probability incurs an infinite amount of error: under a log scoring rule, the moment a 0% (or 100%) forecast turns out wrong, the log error is infinite and you couldn’t even compute a finite score. More colloquially, you’re infinitely confident about something, which in practice and in expectation amounts to being infinitely wrong. Being mistaken at some point is inevitable, so giving 100% or 0% is associated with being very bad at forecasting.
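To make the “infinite error” point concrete, here is a minimal sketch of the log scoring rule (the function name and numbers are illustrative, not from any particular scoring platform):

```python
import math

def log_score(p, occurred):
    # Log score of assigning probability p to an event,
    # given whether it actually occurred.
    q = p if occurred else 1.0 - p
    # A perfect forecast scores 0; worse forecasts go negative.
    return math.log(q) if q > 0 else float("-inf")

# A hedged forecast loses only a finite amount when wrong:
print(log_score(0.01, True))   # about -4.6

# Assigning exactly 0% and being wrong even once is an
# infinite penalty that no later successes can recover:
print(log_score(0.0, True))    # -inf
```

This is why “approximately zero” (say, 1% or 0.1%) and “exactly zero” behave completely differently under proper scoring rules.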
I expect a lot of the downvotes are people noticing that you gave it 0%, and that’s strong evidence you’re very uncalibrated as a forecaster. For what it’s worth, I’m in the highscores on Metaculus, and I’d interpret that signal the same way.
Skimming a few seconds more, I suspect the essay’s writing style doesn’t really explain how the material changes our probability estimate. That makes it hard to distinguish from confused or irrelevant arguments about the forecast. For example, skimming the Conclusion section, I can’t tell whether the essay’s topics actually change the probability that human jobs can be done by some computer for $25/hr or less (that’s the criterion from the original prize post).
I have no reason to doubt you were being genuine, and you are obviously knowledgeable. A productive next step could be consulting someone with a forecasting track record, or reading Philip Tetlock’s work. The community is probably reacting to red flags about calibration, and (possibly) a writing style that doesn’t make it clear how this updates the forecast.
Thanks! I guess I didn’t know the audience very well, and I wanted to come up with an eye-catching title. It was not meant to be literal. I should have gone with “approximately Zero,” but I thought that was silly. Maybe I can try to change it.
That’s a really good idea, changing the title. You could also add a short italicized note at the top, clarifying for readers which probability you’re actually giving.
Thank you for changing it to be less clickbaity. Downvotes removed.
Also I was more focused on the sentence following the one where your quote comes from:
“This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs.”
and “AGI will be developed by January 1, 2100”
I try to argue that the answer to these two proposals is approximately Zero.