The next 30 years seem much less likely to be ‘relatively normal’. My mainline world-model is that nation states will get involved with ML in the next 10 years, and that many industries will be substantially shaken up by ML.
One of my personal measures of psychological health is how many years ahead I feel comfortable making trade-offs for today. This changes over time; I think I’m a bit healthier now than I was when I wrote this, but still not great. I’m not sure how to put a number on this, but I’d guess I can go up to maybe 5 years at the minute (the longest horizons come when I think about personal health and fitness). Beyond that feels a bit foolish.
I still resonate a bit with what I wrote here 4 years ago, but definitely less. My guess is that if I wrote this today, the number I would pick would be “8-12 years” instead of “30”.
Nation states got involved with ML faster than I expected when I wrote this!
Epistemic status: Thinking out loud some more.
Hm, I notice I’m a bit confused about the difference between “ML will blow up as an industry” and “something happens that affects the world more than the internet and smartphones have done so far”.
Honestly, I have a hard time imagining ML stuff that’s massively impactful but isn’t, like, “automating programming”, which seems very-close-to-FOOM to me. I don’t think we can have AGI-complete things without being within like 2 years (or 2 days) of a FOOM.
So then I get split into two worlds, one where it’s “FOOM and extinction” and another world which is “a strong industry that doesn’t do anything especially AGI-complete”. The latter is actually fairly close to “business somewhat-as-usual”, just with a lot more innovation going on, which is kind of nice (while unsettling).
Like, does “automated drone warfare” count as “business-as-usual”? I think maybe it does, it’s part of general innovation and growth that isn’t (to me) clearly more insane than the invention of nukes was.
I think I am expecting massive innovation, and that ML will shake up the world the way we saw it shaken in the 1940s and 1950s (transistors, DNA, nukes, etc.). I’m not sure whether to expect 10-100x more than that before FOOM. I think my gut says “probably not”, but I do not trust my gut here; it hasn’t lived through even the 1940s/50s, never mind other key parts of the scientific, industrial, agricultural, and eukaryotic revolutions.
As we see more progress over the next 4 years I expect we’ll be in a better position to judge how radical the change will be before FOOM.
The answer to lc’s original question is then:
My mainline anticipation involves substantially more progress than what I wrote 4 years ago, and I wouldn’t write the same sentences today with the number ‘30’. I’m not sure that my new expectation doesn’t count as “business somewhat-as-usual” if I expand it to include the amount of progress in the last century; so if I wrote it today I might still say “business somewhat-as-usual”, but over the next 8-12 years, which I do expect will look like a massive shake-up relative to the last 2 decades. (Unless we manage to slow it down or get a moratorium on big training runs.)
Hey, I think you should also consider the out-of-nowhere, narrative-breaking nature of COVID, which also happened after you wrote this. It’s not necessarily proof that the narrative can “break”, but it sure is an example.
And while I read the Sequences way more than 4 years ago, one thing I remember them giving me is a sense that “everything can change very, very fast.”