Certainly I don’t see fusion reactors, solar panels, or the use of semiconductors in electronics as counterexamples, since each of these was invented at some point and didn’t gradually evolve from some completely different technology.
Your definition of “discontinuity” seems broadly compatible with my view of the future then. Definitely there are different technologies that are not all outgrowths of one another.
My main point of divergence is:
Now, when a QNI comes along, it doesn’t necessarily look like a discontinuity, because there might be a lot of work to bridge the distance between idea and implementation. And, this work involves a lot of small details. Because of this, the first version is probably often only a slight improvement on SOTA.
I think that most of the time when a QNI comes along it is worse than the previous thing and takes work to bring it up to the level of the previous thing. In small areas no one pays attention until it overtakes SOTA, but in big areas people usually start paying attention (and investing a significant fraction of the prior SOTA’s size) well before the crossover point. This seems true for solar or fusion, or digital computers, or deep learning for that matter, or self-driving cars or early automobiles.
If that’s right, then you are looking at two continuous curves and you can think about when they cross and you usually start to get a lot of data before the crossover point. And indeed this is obviously how I’m thinking about technologies like deep learning, which are currently useless for virtually all tasks but which I expect to relatively soon overtake alternatives (like humans and other software) in a huge range of very important domains.
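To make the two-curves picture concrete, here is a minimal toy sketch; the growth rates, starting points, and attention threshold are illustrative assumptions invented for the example, not numbers taken from this discussion.

```python
# Toy model of the "two continuous curves" picture: an incumbent technology
# improving slowly and a newcomer starting out worse but improving faster.
# All rates and thresholds below are made-up illustrative assumptions.
import numpy as np

years = np.arange(0, 30)
incumbent = 100 * 1.05 ** years   # incumbent SOTA, 5%/year improvement (assumed)
newcomer = 10 * 1.25 ** years     # newcomer starts ~10x worse, 25%/year (assumed)

# Year the newcomer first draws serious attention: when it reaches half the
# incumbent's performance (an arbitrary stand-in for "a significant fraction
# of the prior SOTA").
attention_year = int(np.argmax(newcomer >= 0.5 * incumbent))
crossover_year = int(np.argmax(newcomer >= incumbent))

print(f"attention around year {attention_year}, crossover around year {crossover_year}")
# With these made-up numbers there are several years of warning between the
# point where observers start tracking the newcomer and the actual crossover.
```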
And if some other AI technology replaces deep learning, I generally expect the same story. There is a scale at which new things can burst onto the scene, but over time that scale becomes smaller and smaller relative to the scale of the field. At this point the appearance of “bursting onto the scene” is primarily driven by big private projects that don’t talk publicly about what they are doing for a while (e.g. putting in 20 person-years of effort before a public announcement, so that they get data internally but an outsider just sees a discontinuity), but even that seems to be drying up fairly quickly.
I’m not sure what the difference is between what you’re saying here and what I said about QNIs. Is it that you expect to be able to see the emerging technology before the singular (crossover) point? Actually, the fact that you describe DL as “currently useless” makes me think we should be talking about progress as a function of two variables: time and “maturity”, where maturity ranges, roughly speaking, over a scale from “theoretical idea” to “proof of concept” to “beats SOTA in lab conditions” to “commercial product”. In this sense, the “lab progress” curve is already past the DL singularity but the “commercial progress” curve maybe isn’t.
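A rough sketch of this two-variable framing, with made-up stage lengths and lag rather than anything taken from the comment itself:

```python
# Progress as a function of (time, maturity): track maturity over time on a
# discrete scale, with the "commercial" reading lagging the "lab" reading.
# Stage names follow the comment; the timeline and lag are invented.
MATURITY = ["theoretical idea", "proof of concept",
            "beats SOTA in lab conditions", "commercial product"]

def maturity_at(year: int, years_per_stage: int = 3) -> str:
    """Maturity reached by a given year, assuming (arbitrarily) that each
    stage takes years_per_stage years of work."""
    stage = min(year // years_per_stage, len(MATURITY) - 1)
    return MATURITY[stage]

# "Lab progress" vs "commercial progress" as two readings of the same curve:
# commercial status lags lab status by one full stage in this sketch.
for year in range(0, 13, 3):
    lab = maturity_at(year)
    commercial = maturity_at(max(year - 3, 0))
    print(f"year {year:2d}: lab = {lab!r:35} commercial = {commercial!r}")
```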
On this model, if post-DL AI technology X appears tomorrow, it will take some time to span the distance from “theoretical idea” to “commercial product”, during which time we would notice it and update our predictions accordingly. But two things to note here:
First, it’s not clear which level of maturity is the relevant reference point for AI risk. In particular, I don’t think you need commercial levels of maturity for AI technology to become risky, for the reasons I discussed in my previous comment (and, we can also add regulatory barriers to that list, although I am not convinced they are as important as Yudkowsky seems to believe).
Second, all this doesn’t sound to me like “AI systems will grow relatively continuously and predictably”, although maybe I just interpreted this statement differently from its intent. For instance, I agree that it’s unlikely technology X will emerge specifically in the next year, so progress over the next year should be fairly predictable. On the other hand, I don’t think it would be very surprising if technology X emerges in the next decade.
IIUC, part of what you’re saying can be rephrased as: TAI is unlikely to be created by a small team, since once a small team shows something promising, tonnes of resources will be thrown at them (and at other teams that might be able to copy the technology) and they won’t be a small team anymore. That sounds plausible, I suppose, but it doesn’t make TAI predictable that long in advance.