I’m a bit confused by your response. First, the meat of the argument:
You are implicitly comparing two models: Mfast and Mslow, which make predictions about the world. Each model makes several claims, including claims about the shape of the function governing AI improvement and about how that shape comes about[1]. So far as I can tell, a typical central claim of people who endorse Mfast is that AIs working on themselves will allow their capabilities to grow hyper-exponentially. Those who endorse Mslow don’t seem to dispute that self-improvement will occur, but expect it to be par for the course for a new technology and to continue to be well modeled by exponential growth.
So, it seems to me that the existence of recursive self-improvement without an observed fast takeoff is evidence against Mfast. I presume you disagree, but I don’t see how from a model selection framework. Mfast predicts either the data we observe now or a fast takeoff, whereas Mslow predicts only the exponential growth we are currently observing (do you disagree that we’re in a time of exponential growth?). Because Mfast spreads its probability mass over two outcomes while Mslow concentrates it on one, Mslow places higher probability on the current data than Mfast does. By Bayes’ rule, Mslow is therefore favored by the existing evidence (i.e. the Bayes factor indicates that you should update towards Mslow). Now, you might have a strong enough prior that you still favor Mfast, but if your model placed less probability mass on the current data than another model did, you should update towards that other model.
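To make the update explicit, here’s a toy calculation of the Bayes factor (the likelihoods below are invented placeholders, not anyone’s stated credences):

```python
# Toy Bayes-factor illustration for the argument above.
# The likelihoods are made-up placeholders, not anyone's actual credences.
p_data_given_fast = 0.5   # Mfast spread its mass over {current data, FOOM by now}
p_data_given_slow = 0.95  # Mslow concentrated its mass on the current data

# Bayes factor in favor of Mfast: a value below 1 means the data favors Mslow.
bayes_factor = p_data_given_fast / p_data_given_slow

prior_odds_fast = 4.0  # e.g. a prior that strongly favors Mfast
posterior_odds_fast = prior_odds_fast * bayes_factor

print(f"Bayes factor: {bayes_factor:.2f}")                     # ~0.53
print(f"Posterior odds for Mfast: {posterior_odds_fast:.2f}")  # ~2.11
```

With these numbers you still end up favoring Mfast (posterior odds above 1), but you have nonetheless updated towards Mslow, which is all the argument requires.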
Second (and lastly), a quibble:
Yitz’s response uses the terms hard/soft takeoff; was that edited? Otherwise your argument against “continuous” (as opposed to slow or soft) comes off as a non sequitur: you’re battling for terminological ground that isn’t even under contention.
Different people will have different versions of each of these models. Some may even oscillate between them as is convenient for argumentative purposes (à la motte and bailey).
Note that those who endorse Mslow don’t think exponential growth will cut it; they expect growth much faster than that, in line with the long-term trends in human history, which are faster than exponential. I’m thinking of e.g. Paul Christiano and Ajeya Cotra here, who I’m pretty sure agree that growth has been and will continue to be superexponential (the recent trend of apparent exponential growth being an aberration).
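For concreteness, here’s a minimal sketch of the exponential vs. superexponential distinction (the functional forms and parameters are arbitrary, chosen only to show the shapes; they are not Christiano’s or Cotra’s actual models):

```python
# Exponential vs. superexponential growth: a toy comparison.
# All parameters are arbitrary; this only illustrates the shapes.
import numpy as np

t = np.linspace(0, 0.99, 100)

exponential = np.exp(3 * t)     # constant growth rate: a straight line on a log scale
superexponential = 1 / (1 - t)  # solves dy/dt = y**2: the growth rate itself grows,
                                # diverging in finite time as t -> 1 (a "singularity")

# On a log scale the superexponential curve bends upward and blows up,
# while the exponential stays linear.
print(np.log(exponential[-1]))       # ~2.97, growing at a constant rate
print(np.log(superexponential[-1]))  # ~4.61, and still accelerating
```

On this view, a stretch of data that looks exponential is roughly what you’d expect to see locally even if the underlying long-run trend is superexponential.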
My complaining about the term “continuous takeoff” was a response to Matthew Barnett’s and others’ usage of the term, not Yitz’s, since as you say Yitz didn’t use it.
Anyhow, to the meat: None of the “hard takeoff people” or hard takeoff models predicted or would predict that the sorts of minor productivity advancements we are starting to see would lead to a FOOM by now. Ergo, it’s a mistake to conclude from our current lack of FOOM that those models made incorrect predictions.
None of the “hard takeoff people” or hard takeoff models predicted or would predict that the sorts of minor productivity advancements we are starting to see would lead to a FOOM by now.
The hard takeoff models predict that there will be fewer AI-caused productivity advancements before a FOOM than the soft takeoff models do. Therefore any AI-caused productivity advancements without a FOOM are relative evidence against the hard takeoff models.
You might say that this evidence is pretty weak, but it feels hard to discount it too much when there are few concrete claims by hard-takeoff proponents about what advances would surprise them. Everything is kinda prosaic in hindsight.
I’m not sure about that actually. Hard takeoff and soft takeoff disagree about the rate of slope change, not about the absolute height of the line. I guess if you are reasoning from “soft takeoff means shorter timelines,” then yeah, it also means higher AI progress prior to takeoff, and in particular predicts more stuff happening now. But people generally agree that, despite that effect, the overall correlation between short timelines and fast takeoff is positive.
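Here’s a toy sketch of that “same height now, different rate of slope change” point (the functional forms are invented purely for illustration):

```python
# Two takeoff curves that agree on today's "height" but differ in curvature.
# The functional forms are invented purely to illustrate the distinction.
import numpy as np

t = np.linspace(0, 2, 200)  # t = 0 is "now", t > 0 is the future
hard = np.exp(t**3)         # slope starts shallow, then changes explosively
soft = np.exp(2 * t)        # slope changes smoothly throughout

# Both curves pass through the same point at t = 0 ...
print(hard[0], soft[0])     # 1.0 1.0
# ... so today's absolute height doesn't discriminate between them; they only
# come apart in how fast the slope changes going forward.
print(hard[-1] > soft[-1])  # True: the hard-takeoff curve eventually overtakes
```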
Anyhow, even if you are right, I definitely think the evidence is pretty weak. Both sides make pretty much the exact same retrodictions and were in fact equally unsurprised by the last few years. I agree that Yudkowsky deserves spanking for not working harder to make concrete predictions/bets with Paul, but he did work somewhat hard, and also it’s not like Paul, Ajeya, etc. are going around sticking their necks out much either. Finding concrete stuff to bet on (amongst this group of elite futurists) is hard. I speak from experience here: I’ve talked with Paul and Ajeya and tried to find things in the next 5 years we disagree on, and it’s not easy, EVEN THOUGH I HAVE 5-YEAR TIMELINES. We probably spent about an hour on it. I agree we should do it more.
(Think about you vs. me. We both thought in detail about what our median futures look like. They were pretty similar, especially in the next 5 years!)