You said that you updated and shortened your median timeline to 2047 and mode to 2035. But it seems to me that you need to shorten your timelines again.
The post “It’s time for EA leadership to pull the short-timelines fire alarm” says:
“it seems very possible (>30%) that we are now in the crunch-time section of a short-timelines world, and that we have 3-7 years until Moore’s law and organizational prioritization put these systems at extremely dangerous levels of capability.”
It seems that the purpose of the bet was to test this hypothesis:
“we are offering to bet up to $1000 against the idea that we are in the “crunch-time section of a short-timelines”
My understanding is that if AI progress occurred slowly and no more than one of the listed advancements was made by 2026-01-01, then this short-timelines hypothesis would be proven false and could be ignored.
However, the bet was conceded on 2023-03-16, much earlier than the deadline, so the bet failed to prove the hypothesis false.
It seems to me that the rational action is now to update toward believing that this short-timelines hypothesis is true. 3-7 years from 2022 is 2025-2029, which is substantially earlier than 2047.
I don’t really agree, although it might come down to what you mean. When some people talk about their AGI timelines they often mean something much weaker than what I’m imagining, which can lead to significant confusion.
If your bar for AGI was “score very highly on college exams”, then my median “AGI timeline” dropped from something like 2030 to 2025 over the last 2 years. Whereas if your bar was more like “radically transform the human condition”, I went from ~2070 to 2047.
I just see a lot of ways that we could have very impressive software programs and yet it could still take a long time to fundamentally transform the human condition, for example because of regulation, or because we experience setbacks due to war. My fundamental model hasn’t changed here, although I have become substantially more impressed with current tech than I used to be.
(Actually, I think there’s a good chance that there will be no major delays at all and the human condition will be radically transformed some time in the 2030s. But because of the long list of possible delays, my overall distribution is skewed right. This means that even though my median is 2047, my mode is like 2034.)
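To make the skew point concrete, here is a minimal sketch of how a right-skewed distribution over the year of transformation can have its mode more than a decade before its median. The lognormal shape and parameters are purely illustrative assumptions (not the author’s actual model), chosen so the median lands near 2047 and the mode near 2034:

```python
# Illustrative only: a right-skewed (lognormal) distribution over "years
# until the human condition is radically transformed", anchored at 2022.
# Parameters are assumptions chosen so the median is ~25 years (2047)
# and the mode is ~12 years (2034); they are not the author's numbers.
import numpy as np

base_year = 2022
median_years = 25   # 2022 + 25 = 2047
mode_years = 12     # 2022 + 12 = 2034

mu = np.log(median_years)                            # lognormal median = exp(mu)
sigma = np.sqrt(np.log(median_years / mode_years))   # lognormal mode = exp(mu - sigma^2)

samples = base_year + np.random.default_rng(0).lognormal(mu, sigma, size=1_000_000)

print("median year ~", round(np.median(samples)))    # ~2047

# Crude mode estimate from a 1-year histogram (tail samples past 2300
# fall outside the bins, which doesn't affect the modal bin).
counts, edges = np.histogram(samples, bins=np.arange(2022, 2300))
print("modal year  ~", int(edges[np.argmax(counts)]))  # ~2033-2034
```

The point of the sketch is just that a long right tail of possible delays pulls the median well past the single most likely year, without requiring the most likely year itself to move much.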