That said, it seems the probability of a catastrophic AI takeover in humanity’s relatively near-term future (say, the next 50 years) is low (maybe a 10% chance). However, it’s perhaps significantly more likely in the very long run.
50 years seems like a strange unit of time from my perspective, because the singularity will massively accelerate time in subjective terms. So 50 years might be more analogous to several thousand years historically. (This assumes serious takeoff starts within, say, 30 years and isn’t slowed down by heavy coordination.)
(I made a separate comment making the same point. I just saw that you had already written this, so I’m moving the couple of references I had over here to unify the discussion.)
Point previously made in the “security and stability” section of Propositions Concerning Digital Minds and Society:
If wars, revolutions, and expropriation events continue to happen at historically typical intervals, but on digital rather than biological timescales, then a normal human lifespan would require surviving an implausibly large number of upheavals; human security therefore requires the establishment of ultra-stable peace and socioeconomic protections.
There’s also a similar point made in The Age of Em, chapter 27:
This protection of human assets, however, may only last for as long as the em civilization remains stable. After all, the typical em may experience a subjective millennium in the time that ordinary humans experience 1 objective year, and it seems hard to offer much assurance that an em civilization will remain stable over 10s of 1000s of subjective em years.
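A minimal sketch of the arithmetic behind both points above. The 30-year lead time comes from the earlier comment and the 1000x figure from the Age of Em quote; the average speedup, upheaval interval, and lifespan are purely illustrative assumptions:

```python
# Illustrative arithmetic only; the speedup schedule and upheaval interval are
# assumptions for this sketch, not forecasts.

# (1) "50 years might be more analogous to several thousand years historically"
pre_takeoff_years = 30        # assumed calendar years before serious takeoff
assumed_avg_speedup = 200     # assumed average subjective speedup after takeoff
subjective_equivalent = pre_takeoff_years + (50 - pre_takeoff_years) * assumed_avg_speedup
print(f"50 calendar years ~ {subjective_equivalent:,} subjective-equivalent years")

# (2) "surviving an implausibly large number of upheavals"
em_speedup = 1000             # subjective years per objective year, from the Age of Em quote
upheaval_interval = 75        # assumed: one major upheaval per ~75 subjective years
lifespan_years = 80           # objective human lifespan
upheavals = lifespan_years * em_speedup / upheaval_interval
print(f"An 80-year lifespan would span ~{upheavals:,.0f} upheaval-equivalents")
```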
I think the point you’re making here is roughly correct. I was being imprecise with my language. However, if memory serves, I recall someone looking at a dataset of wars over time and finding that there didn’t seem to be much evidence that wars increased in frequency in response to economic growth. Thus, calendar time might actually be the better measure here.
(Pretty plausible you agree here, but just making the point for clarity.) I feel like the disanalogy due to AIs running at massive subjective speeds (e.g. probably >10x speed even prior to human obsolescence, and way more extreme after that) means that the argument “wars don’t increase in frequency in response to economic growth” is pretty dubiously applicable. Economic growth hasn’t yet resulted in >10x faster subjective experience : ).
I’m not actually convinced that subjective speed is what matters. It seems like what matters more is how much computation is happening per unit of time, which seems highly related to economic growth, even in human economies (due to population growth).
I also think AIs might not think much faster than us. One plausible reason you might expect AIs to think much faster than us is that GPU clock speeds are so high, but I think this is misleading. GPT-4 seems to “think” much slower than GPT-3.5, in the sense of processing fewer tokens per second, and the trend here seems to be towards something resembling human subjective speeds. The reason for this trend seems to be that there’s a tradeoff between “thinking fast” and “thinking well”, and it’s not clear why AIs would necessarily max out the “thinking fast” parameter at the expense of “thinking well”.
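To make the tokens-per-second comparison concrete, a rough sketch; the throughput figures below are assumptions for illustration, not measurements:

```python
# Crude comparison of model generation speed to a human "subjective" baseline.
# All numbers are illustrative assumptions, not benchmarks.
human_tokens_per_sec = 4          # assumed: a few words per second of deliberate thought/speech
assumed_throughput = {
    "GPT-3.5 (assumed)": 100,     # tokens/second
    "GPT-4 (assumed)": 30,        # tokens/second; slower but higher quality
}
for model, tps in assumed_throughput.items():
    print(f"{model}: ~{tps / human_tokens_per_sec:.0f}x the human token rate")
# Under these assumptions, the higher-quality model sits much closer to human-like
# serial speed, illustrating the "thinking fast" vs "thinking well" tradeoff.
```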
My core prediction is that AIs will be able to make pretty good judgements on core issues much, much faster. Then, due to diminishing returns on reasoning, decisions will overall be made much, much faster.
I agree the future AI economy will make more high-quality decisions per unit of time, in total, than the current human economy. But the “total rate of high quality decisions per unit of time” increased in the past with economic growth too, largely because of population growth. I don’t fully see the distinction you’re pointing to.
To be clear, I also agree AIs in the future will be smarter than us individually. But if that’s all you’re claiming, I still don’t see why we should expect wars to happen more frequently as we get individually smarter.
I mean, the “total rate of high quality decisions per year” would obviously increase if we redefined one year to be 10 revolutions around the sun, and indeed the number of wars per year would also increase. GDP per capita per year would also increase accordingly. My claim is that the situation looks much more like literally speeding up time (while a bunch of other stuff is also happening).
Separately, I wouldn’t expect population size or technology to date to greatly increase the rate at which large-scale strategic decisions are made, so my model doesn’t make a very strong prediction here. (I could see an increase of severalfold, but I could also imagine a decrease of severalfold due to there being more people to coordinate. I’m not very confident about the exact change, but it would be pretty surprising to me if it were as large as the per capita GDP increase, which is more like 10-30x I think. E.g. consider meeting times, which seem basically similar in practice throughout history.) And a change of perhaps 3x in either direction is overwhelmed by other variables which might affect the rate of wars, so the realistic amount of evidence is tiny. (Also, there aren’t that many wars, so even if there weren’t possible confounders, the evidence would surely be tiny due to noise.)
But I’m claiming that the rates of cognition will increase more like 1000x, which seems like a pretty different story. It’s plausible to me that other variables cancel this out or push the effect the other way, but I’m extremely skeptical that the historical data provides much evidence in the way you’ve suggested. (Various specific mechanistic arguments about war being less plausible as you get smarter do seem plausible to me, to be clear.)
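As a rough gut check on how much the historical record could even show here, a toy Poisson sketch; the baseline war count is an assumed order of magnitude, not taken from any dataset:

```python
# Toy detectability check: how visible would a ~3x vs ~1000x change in the
# underlying rate be in century-level war counts? Baseline is an assumption.
import math

baseline_wars_per_century = 15    # assumed order of magnitude

for multiplier in (1, 3, 1000):
    lam = baseline_wars_per_century * multiplier
    print(f"{multiplier:>4}x rate -> ~{lam:.0f} wars/century "
          f"(Poisson noise ~ +/-{math.sqrt(lam):.0f})")
# A ~3x shift is only a few noise-widths from baseline and is easily masked by
# confounders; a ~1000x shift in the driving rate would be unmistakable if it
# passed through to war frequency even weakly.
```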
I mean, the “total rate of high quality decisions per year” would obviously increase if we redefined one year to be 10 revolutions around the sun, and indeed the number of wars per year would also increase. GDP per capita per year would also increase accordingly. My claim is that the situation looks much more like literally speeding up time (while a bunch of other stuff is also happening).
[...]
But I’m claiming that the rates of cognition will increase more like 1000x, which seems like a pretty different story.
My question is: why will AI have the approximate effect of “speeding up calendar time”?
I speculated about three potential answers:
1. Because AIs will run at higher subjective speeds.
2. Because AIs will accelerate economic growth.
3. Because AIs will speed up the rate at which high-quality decisions occur per unit of time.
In case (1) the claim seems confused for two reasons.
First, I don’t agree with the intuition that subjective cognitive speeds matter much, compared to the rate at which high-quality decisions are made, for “how quickly stuff like wars should be expected to happen”. Intuitively, if an equally populated society subjectively thought at 100x the rate we do, but each person in that society only made a decision every 100 years (from our perspective), then you’d expect wars to happen less frequently per unit of time, since there just isn’t much decision-making going on during most time intervals, despite their very fast subjective speeds.
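To make the hypothetical concrete, a toy sketch; the population and the per-decision conflict probability are arbitrary illustrative constants:

```python
# Minimal version of the hypothetical above: calendar-time conflict frequency
# tracks the rate of consequential decisions, not subjective thinking speed.
population = 1_000_000
p_conflict_per_decision = 1e-7    # assumed hazard per consequential decision

societies = {
    "baseline humans (1 decision/person/year)": 1.0,
    "100x subjective speed, 1 decision/person/100 years": 0.01,
}
for name, decisions_per_person_per_year in societies.items():
    decisions_per_century = population * decisions_per_person_per_year * 100
    expected_conflicts = decisions_per_century * p_conflict_per_decision
    print(f"{name}: ~{expected_conflicts:.1f} expected conflicts per century")
# The fast-thinking but slow-deciding society sees ~100x fewer conflicts per
# calendar century, despite thinking 100x faster.
```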
Second, there is a tradeoff between “thinking speed” and “thinking quality”. There’s no fundamental reason, as far as I can tell, that the tradeoff favors running minds at speeds way faster than human subjective times. Indeed, GPT-4 seems to run significantly subjectively slower in terms of tokens processed per second compared to GPT-3.5. And there seems to be a broad trend here towards something resembling human subjective speeds.
In cases (2) and (3), I pointed out that it seemed like the frequency of war did not increase in the past, despite the fact that these variables had accelerated. In other words, despite an accelerated rate of economic growth, and an increased rate of total decision-making in the world in the past, war did not seem to become much more frequent over time.
Overall, I’m just not sure what you’d identify as the causal mechanism that would make AIs speed up the rate of war, and each causal pathway that I can identify seems either confused to me, or refuted directly by the (admittedly highly tentative) evidence I presented.
Thanks for the clarification. I think my main crux is:
Second, there is a tradeoff between “thinking speed” and “thinking quality”. There’s no fundamental reason, as far as I can tell, that the tradeoff favors running minds at speeds way faster than human subjective times. Indeed, GPT-4 seems to run significantly subjectively slower in terms of tokens processed per second compared to GPT-3.5. And there seems to be a broad trend here towards something resembling human subjective speeds.
This reasoning seems extremely unlikely to hold deep into the singularity for any reasonable notion of subjective speed.
Deep into the singularity we expect economic doubling times of weeks. This will likely involve designing and building physical structures at extremely rapid speeds, such that baseline processing will need to be way, way faster.
See also The Age of Em.
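For a sense of scale, here is the growth arithmetic that “doubling times of weeks” implies; the specific doubling times are illustrative assumptions:

```python
# How fast is an economy that doubles in weeks? Doubling times are illustrative
# assumptions for comparison, not forecasts.
doubling_times_years = {
    "today's world economy (~3%/yr growth)": 23.0,
    "singularity scenario (4-week doublings)": 4 / 52,
}
for label, t_double in doubling_times_years.items():
    annual_growth_factor = 2 ** (1 / t_double)
    print(f"{label}: ~{annual_growth_factor:,.2f}x output per year")
# Four-week doublings imply roughly 2^13 (~8,000x) growth per year, which is why
# baseline processing would likely need to be far faster than today's.
```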
Are there any short-term predictions that your model makes here? For example, do you expect tokens processed per second to start trending substantially upward at some point in future multimodal models?
My main prediction would be that for various applications, people will considerably prefer models that generate tokens faster, including much faster than humans. And there will be many applications where speed is preferred over quality.
I might try to think of some precise predictions later.
If the claim is about whether AI speed will be high for “various applications”, then I agree. We already have some applications, such as integer arithmetic, where speed is optimized heavily, and computers can do it much faster than humans.
In context, it sounded like you were referring to tasks like automating a CEO, or physical construction work. In these cases, it seems likely to me that quality will generally be preferred over speed, and that sequential processing speeds for AIs automating these tasks will not vastly exceed those of humans (by “vastly” I mean something like >2 OOMs faster). Indeed, for some highly important tasks that future superintelligences automate, sequential processing speeds may even be lower for AIs than for humans, because decision-making quality will just be that important.
I was referring to tasks like automating a CEO or construction work. I was just trying to think of the most relevant and easy-to-measure short-term predictions (if there are already AI CEOs, then the world is already pretty crazy).
The main thing here is that as models become more capable and general in the near-term future, I expect there will be intense demand for models that can solve ever larger and more complex problems. For these models, people will be willing to pay the costs of high latency, given the benefit of increased quality. We’ve already seen this in the way people prefer GPT-4 to GPT-3.5 in a large fraction of cases (for me, a majority of cases).
I expect this trend to continue for the foreseeable future, at least until slightly after we’ve automated most human labor, and potentially into the very long run too, depending on physical constraints. I am not sufficiently educated about the physical constraints here to predict what will happen “deep into the singularity”, but it’s important to note that physical constraints can cut both ways.
To the extent that physics permits extremely useful models by virtue of their being very large and capable, you should expect people to optimize heavily for that despite the cost in terms of latency. By contrast, to the extent that physics permits extremely useful models by virtue of their being very fast, you should expect people to optimize heavily for that despite the cost in terms of quality. The balance we strike here is not a simple function of how far we are from some abstract physical limit, but a function of how these physical constraints trade off against each other.
There is definitely a conceivable world in which the correct balance still favors much-faster-than-human serial speeds, but it’s not clear to me that this is the world we actually live in. My intuitive, speculative guess is that we live in the world where, for the most complex tasks that bottleneck important economic decision-making, people will optimize heavily for model quality at the cost of latency, settling on something within 1-2 OOMs of human-level latency.
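As a rough translation of what “within 1-2 OOMs of human-level latency” would mean, under an assumed human serial baseline:

```python
# What "within 1-2 OOMs of human-level latency" could cash out to.
# The human serial baseline is an assumed illustrative figure.
human_serial_rate = 4             # assumed tokens-equivalent of deliberate thought per second

for ooms in (0, 1, 2):
    rate = human_serial_rate * 10 ** ooms
    print(f"{ooms} OOM above human: ~{rate} tokens-equivalent/second")
# The guess above is that quality-optimized frontier models end up somewhere in
# this band for the most complex decision-relevant tasks, rather than many
# orders of magnitude beyond it.
```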
Separately, current clock speeds don’t really matter on the timescales we’re discussing; physical limits matter. (Though current clock speeds do point at ways in which human subjective speed might be much slower than physical limits.)