Note that those who endorse Mslow don’t think exponential growth will cut it; it’ll be much faster than that (in line with the long-term trends in human history, which are faster than exponential). I’m thinking of e.g. Paul Christiano and Ajeya Cotra here who I’m pretty sure agree growth has been and will continue to be superexponential (the recent trend of apparent exponential growth being an aberration).
My complaining about the term “continuous takeoff” was a response to Matthew Barnett’s and others’ usage of the term, not Yitz’s, since as you say Yitz didn’t use it.
Anyhow, to the meat: None of the “hard takeoff people” or hard takeoff models predicted or would predict that the sorts of minor productivity advancements we are starting to see would lead to a FOOM by now. Ergo, it’s a mistake to conclude from our current lack of FOOM that those models made incorrect predictions.
None of the “hard takeoff people” or hard takeoff models predicted or would predict that the sorts of minor productivity advancements we are starting to see would lead to a FOOM by now.
The hard takeoff models predict that there will be fewer AI-caused productivity advancements before a FOOM than soft takeoff models do. Therefore any AI-caused productivity advancement without FOOM is relative evidence against the hard takeoff models.
You might say that this evidence is pretty weak; but it feels hard to discount the evidence too much when there are few concrete claims by hard-takeoff proponents about what advances would surprise them. Everything is kinda prosaic in hindsight.
I’m not sure about that actually. Hard takeoff and soft takeoff disagree about the rate of slope change, not about the absolute height of the line. I guess if you are thinking of the “soft takeoff means shorter timelines” effect, then yeah, soft takeoff also means more AI progress prior to takeoff, and in particular predicts more stuff happening now. But people generally agree that despite that effect, the overall correlation between short timelines and fast takeoff is positive.
Anyhow, even if you are right, I definitely think the evidence is pretty weak. Both sides make pretty much the exact same retrodictions and were in fact equally unsurprised by the last few years. I agree that Yudkowsky deserves spanking for not working harder to make concrete predictions/bets with Paul, but he did work somewhat hard, and also it’s not like Paul, Ajeya, etc. are going around sticking their necks out much either. Finding concrete stuff to bet on (amongst this group of elite futurists) is hard. I speak from experience here: I’ve talked with Paul and Ajeya and tried to find things in the next 5 years we disagree on, and it’s not easy, EVEN THOUGH I HAVE 5-YEAR TIMELINES. We spent about an hour on it, probably. I agree we should do it more.
(Think about you vs. me. We both thought in detail about what our median futures look like. They were pretty similar, especially in the next 5 years!)