If you continuously improve a system’s speed, then the time required to accomplish each fixed task will be continuously reduced. However, if you continuously improve a system’s quality, then you may see discontinuous jumps in the time required to accomplish certain tasks. So if we think about these dimensions as possible improvements rather than types of superintelligence, it seems there is a distinction.
This is something we see often. For example, I might improve an approximation algorithm by speeding it up, or by improving its approximation ratio (and in practice we see both kinds of improvements, at least in theory). In the former case, every problem gets 10% faster with each 10% improvement. In the latter case, there are certain problems (such as “find a cut in this graph which is within 15% of the maximal possible size”) for which the running time jumps discontinuously overnight.
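To make the two kinds of improvement concrete, here is a toy Python sketch (my own construction, not anything from the discussion above) using MAX-CUT. Speeding up either routine below helps on every graph; raising the guaranteed approximation ratio instead (say from the 1/2 these heuristics achieve toward the roughly 0.878 of Goemans-Williamson) is what suddenly changes which accuracy targets, like “within 15% of optimal,” stop requiring something close to brute force.

```python
import random

def cut_size(side, edges):
    """Number of edges crossing the current partition."""
    return sum(1 for u, v in edges if side[u] != side[v])

def random_cut(n, edges):
    """Random assignment: cuts half of the edges in expectation (ratio 1/2)."""
    side = [random.random() < 0.5 for _ in range(n)]
    return cut_size(side, edges)

def local_search_cut(n, edges):
    """Flip any vertex whose flip enlarges the cut; the local optimum it reaches
    also cuts at least half of all edges, and usually does better in practice."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = [False] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            same = sum(1 for u in adj[v] if side[u] == side[v])
            if same > len(adj[v]) - same:   # flipping v cuts more edges
                side[v] = not side[v]
                improved = True
    return cut_size(side, edges)

random.seed(0)
n = 40
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < 0.2]
print("edges:", len(edges))
print("random cut:      ", random_cut(n, edges))
print("local-search cut:", local_search_cut(n, edges))
```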
You see a similar tradeoff in machine learning, where some changes improve the quality of the solution you can achieve (e.g. reducing the classification error) and others let you achieve a similar-quality solution faster.
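A minimal numpy sketch of those two levers, again my own toy example rather than anything from the field’s actual benchmarks: a larger step size reaches the same loss in a tenth of the steps (a speed improvement), while adding the feature the model was missing lowers the attainable loss itself (a quality improvement).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = x ** 2 + 0.05 * rng.normal(size=200)   # the true relationship is quadratic

def fit(features, lr, steps):
    """Plain gradient descent on mean squared error; returns the final MSE."""
    X = np.column_stack(features)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return float(np.mean((X @ w - y) ** 2))

linear    = [np.ones_like(x), x]        # model that cannot represent the truth
quadratic = linear + [x ** 2]           # "quality" change: a richer model

print("linear model, small steps:", fit(linear, 0.01, 2000))
print("linear model, large steps:", fit(linear, 0.10, 200))     # same floor, 10x fewer steps
print("quadratic model:          ", fit(quadratic, 0.10, 2000)) # a lower floor entirely
```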
This seems like a really important distinction from the perspective of evaluating the plausibility of a fast takeoff. One question I’d love to see more work on is exactly what is going on in normal machine learning progress. In particular, to what extent are we really seeing quality improvements, vs. speed improvements + an unwillingness to do fine-tuning for really expensive algorithms? The latter model is consistent with my knowledge of the field, but has very different implications for forecasts.
If we push ourselves a bit, I think we can establish the plausibility of a fast takeoff. We have to delve deeply into the individual components of intelligence, however.
Thinking about discontinuous jumps: improving a search algorithm from order n squared to order n log n is a discontinuous jump. It appears to be a jump in speed...
However, using an improved algorithm to search a space of possible designs, plans or theorems an order of magnitude faster could seem indistinguishable from a jump in quality.
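A concrete caricature of that kind of jump (a standard textbook example, chosen by me rather than by the commenter): the same yes/no question answered by an O(n²) scan over all pairs and by an O(n log n) sort-and-sweep.

```python
import random
import time

def has_pair_quadratic(nums, target):
    """O(n^2): check every pair of elements."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_sorted(nums, target):
    """O(n log n): sort once, then walk two pointers inward."""
    s = sorted(nums)
    lo, hi = 0, len(s) - 1
    while lo < hi:
        total = s[lo] + s[hi]
        if total == target:
            return True
        if total < target:
            lo += 1
        else:
            hi -= 1
    return False

nums = [random.randrange(10**9) for _ in range(3000)]
for search in (has_pair_quadratic, has_pair_sorted):
    start = time.perf_counter()
    search(nums, -1)                 # impossible target forces the worst case
    print(search.__name__, f"{time.perf_counter() - start:.3f}s")
```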
Reducing error rates seems like an improvement in quality, yet it may be possible to reduce error rates, for example, by running more trials of an experiment. Here, speed seems to have produced quality.
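A toy illustration of speed buying quality, under the assumption that the error is statistical: a Monte Carlo estimate’s error shrinks roughly like 1/√n, so a system fast enough to afford more trials gets a more accurate answer without any change to the method.

```python
import math
import random

def estimate_pi(trials):
    """Monte Carlo estimate of pi from points sampled in the unit square."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1
               for _ in range(trials))
    return 4 * hits / trials

random.seed(0)
for trials in (10**2, 10**4, 10**6):
    est = estimate_pi(trials)
    print(f"{trials:>8} trials: estimate {est:.4f}  error {abs(est - math.pi):.4f}")
```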
Going the other way around, switching a clinical trial from a frequentist design to an adaptive Bayesian design seems like an improvement in quality, yet the frequentist trial can be made just as valid if we run more trials. An apparent improvement in quality is overcome by speed.
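Here is a rough numerical caricature of that trade, entirely my own construction (it assumes a simple Bernoulli outcome, uses scipy, and ignores everything that makes real trial design hard): an adaptive Beta-Bernoulli design stops as soon as it is confident the treatment works, while a fixed-size design reaches a comparable conclusion by enrolling more subjects.

```python
import random
from scipy.stats import beta, binomtest

TRUE_RATE = 0.65            # success probability, unknown to the trial
random.seed(1)

def adaptive_bayesian_trial(threshold=0.95, max_n=1000):
    """Enroll one subject at a time; stop once the Beta-Bernoulli posterior
    says P(rate > 0.5) exceeds the threshold."""
    successes = 0
    for n in range(1, max_n + 1):
        successes += random.random() < TRUE_RATE
        if beta.sf(0.5, 1 + successes, 1 + n - successes) > threshold:
            return n
    return max_n

def fixed_frequentist_trial(n, alpha=0.05):
    """Enroll a predetermined n, then run a one-sided binomial test of rate > 0.5."""
    successes = sum(random.random() < TRUE_RATE for _ in range(n))
    return binomtest(successes, n, 0.5, alternative="greater").pvalue < alpha

print("adaptive design stopped after", adaptive_bayesian_trial(), "subjects")
print("fixed design, 200 subjects, significant?", fixed_frequentist_trial(200))
```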