As pointed out in note 14, humans can solve all computable problems, because they can carry out the steps of running a Turing machine (very slowly), which we know/suspect can do everything computable. It would seem then that a quality superintelligence is just radically faster than a human at these problems. Is it different to a speed superintelligence?
If you continuously improve a system’s speed, then the time required to accomplish each fixed task will fall continuously. However, if you continuously improve a system’s quality, then you may see discontinuous jumps in the time required to accomplish certain tasks. So if we think about these dimensions as possible improvements rather than as types of superintelligence, there does seem to be a distinction.
This is something we see often. For example, I might improve an approximation algorithm by speeding it up, or by improving its approximation ratio (and we see both kinds of improvements, at least in the theory literature). In the former case, every problem gets 10% faster with each 10% improvement. In the latter case, there are certain problems (such as “find a cut in this graph which is within 15% of the maximum possible size”) for which the running time jumps discontinuously overnight.
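To make the two dials concrete, here is a toy sketch (the graph and code are my own illustration, not from the text above): a simple local-search heuristic for max-cut guarantees a cut containing at least half the edges, i.e. a 0.5 approximation ratio. Speeding this code up changes how fast you get an answer but never which guarantee you get; reaching a better ratio (e.g. the ~0.878 of the Goemans–Williamson algorithm) requires a different algorithm entirely, and arrives as a jump.

```python
import random

def local_search_max_cut(edges, n, seed=0):
    """Local search for max-cut: flip vertices between sides while doing so
    grows the cut. At a local optimum, at least half of each vertex's edges
    are cut, so the cut contains at least |E|/2 edges (a 0.5-approximation)."""
    random.seed(seed)
    side = [random.randint(0, 1) for _ in range(n)]
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cut = sum(1 for a, b in edges if v in (a, b) and side[a] != side[b])
            uncut = sum(1 for a, b in edges if v in (a, b) and side[a] == side[b])
            if uncut > cut:      # flipping v strictly improves the cut
                side[v] ^= 1
                improved = True
    return side, sum(1 for a, b in edges if side[a] != side[b])

# A 4-cycle with one diagonal: 5 edges, so the guarantee is a cut of >= 2.5.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
_, cut_size = local_search_max_cut(edges, n=4)
```

Each flip strictly increases the cut, so the loop terminates; the guarantee holds however slowly or quickly you run it.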
You see a similar tradeoff in machine learning, where some changes improve the quality of solution you can achieve (e.g. reducing the classification error) and others let you achieve similar quality solutions faster.
This seems like a really important distinction from the perspective of evaluating the plausibility of a fast takeoff. One question I’d love to see more work on is exactly what is going on in normal machine learning progress. In particular, to what extent are we really seeing quality improvements, vs. speed improvements plus an unwillingness to do fine-tuning for really expensive algorithms? The latter model is consistent with my knowledge of the field, but has very different implications for forecasts.
If we push ourselves a bit, I think we can establish the plausibility of a fast takeoff. To do so, however, we have to delve deeply into the individual components of intelligence.
Thinking about discontinuous jumps: improving a search algorithm from O(n²) to O(n log n) is a discontinuous jump. It appears to be a jump in speed...
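A back-of-envelope step count (the model below is my own illustration) shows how large that single jump is:

```python
import math

def quadratic_steps(n):
    """Rough comparison count for an O(n^2) algorithm, e.g. selection sort."""
    return n * (n - 1) // 2

def nlogn_steps(n):
    """Rough comparison count for an O(n log n) algorithm, e.g. mergesort."""
    return int(n * math.log2(n)) if n > 1 else 0

# At n = 10^6 the one-time algorithmic jump is worth a factor of ~25,000,
# far more than years of steady incremental speedups.
for n in (10, 10**3, 10**6):
    ratio = quadratic_steps(n) / max(nlogn_steps(n), 1)
    print(f"n={n:>9,}  quadratic={quadratic_steps(n):>15,}  ratio={ratio:,.0f}")
```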
However, using an improved algorithm to search a space of possible designs, plans or theorems an order of magnitude faster could seem indistinguishable from a jump in quality.
Reducing error rates seems like an improvement in quality, yet it may be possible to reduce error rates, for example, by running more trials of an experiment. Here, speed seems to have produced quality.
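A quick simulation makes the trade concrete (the 70% accuracy and 21 repetitions are my own arbitrary choices): majority-voting over repeated noisy trials, a pure speed resource, converts a mediocre error rate into a good one.

```python
import random

def noisy_trial(p_correct=0.7, truth=True):
    """One unreliable measurement: reports the truth with probability p_correct."""
    return truth if random.random() < p_correct else not truth

def majority_of(k, p_correct=0.7):
    """Repeat the trial k times and report the majority answer."""
    return sum(noisy_trial(p_correct) for _ in range(k)) > k / 2

random.seed(1)
runs = 2000
single_acc = sum(noisy_trial() for _ in range(runs)) / runs   # close to 0.70
voted_acc = sum(majority_of(21) for _ in range(runs)) / runs  # roughly 0.97
```

The "quality" of the voted answer was purchased entirely by doing 21x more work.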
Going the other way around, switching a clinical trial from a frequentist design to an adaptive Bayesian design seems like an improvement in quality, yet the frequentist trial can be made just as valid by running more trials. An apparent improvement in quality is overcome by speed.
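The substitution works because statistical precision scales as one over the square root of the number of trials (the sigma and sample sizes below are my own placeholder numbers): any fixed quality gain in the estimator can be matched by a quadratic increase in trial count.

```python
import math

def standard_error(sigma, n):
    """Standard error of a mean estimated from n independent trials
    with per-trial noise sigma."""
    return sigma / math.sqrt(n)

# A design improvement that halves the error is matched by simply
# running four times as many trials:
improved_design = standard_error(1.0, 100) / 2
brute_force = standard_error(1.0, 400)
assert math.isclose(improved_design, brute_force)
```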
Turing machine (very slowly), which we know/suspect can do everything computable. It would seem then that a quality superintelligence is just radically faster than a human at these problems.
I think that statement is misleading, here. To solve a real-world problem on a TM, you do need to figure out an algorithm that solves your problem. If a Dark Lord showed up and handed me a (let’s say ridiculously fast compared to any computer realizable on what looks like our physics) UTM—and I then gave that UTM to a monkey—the monkey may have a fairly good idea of what it’d want (unlimited bananas! unlimited high-desirability sex partners!), but it wouldn’t have any idea of how to use the UTM to get it.
If I tried to use that UTM myself, my chances would probably be better—I can think of some interesting and fairly safe uses to put a powerful computer to—but it still wouldn’t easily allow me to change everything I’d want changed in this world, or even give me an easy way to come up with a really good strategy for doing so. In the end, my mental limits on how to decide on algorithms to deal with specific real-world issues would still be very relevant.
Humans cannot simulate a Turing machine because they are too inaccurate.

If humans merely fail at any particular mechanical operation with 5% probability, then of course you could implement your computations in some form that is resistant to such errors. Even with a more complicated error pattern, where you might e.g. fail in a Byzantine way during intervals of power-law-distributed length, or fail at each type of task with 5% probability (but fail at that task every time you repeated it), it seems not-so-hard to implement Turing machines in a robust way.
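Here is a minimal sketch of that robustness construction for the simple independent-error case (the 5% figure comes from the comment above; the toy 'step' and the voting parameters are mine): run each mechanical operation several times and keep the majority answer.

```python
import random
from collections import Counter

def unreliable(step, p_fail=0.05):
    """Model a human who botches a mechanical operation with probability
    p_fail, independently each time, returning an arbitrary wrong answer."""
    def noisy(state):
        if random.random() < p_fail:
            return ("GARBAGE", random.random())
        return step(state)
    return noisy

def robust(step, k=5):
    """Execute the unreliable step k times and keep the majority answer.
    With independent errors, the per-step failure rate falls by orders
    of magnitude for k = 5."""
    def voted(state):
        return Counter(step(state) for _ in range(k)).most_common(1)[0][0]
    return voted

random.seed(0)
step = unreliable(lambda s: s + 1)        # one 'tape operation' of a toy machine
safe_step = robust(step, k=5)
raw_errors = sum(step(0) != 1 for _ in range(1000))         # typically around 50
voted_errors = sum(safe_step(0) != 1 for _ in range(1000))  # almost always 0
```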
In one of my classes I emulated a Turing machine. Based on that experience, I am going to say that a massive team of people would have a hard time with the task.
If you want to understand the limits of human accuracy in this kind of task, you can look at how well people do double-entry bookkeeping. It’s a painfully slow and error-prone process.
Error rates are a fundamental element of intelligence, whether we are taking a standardized test or trying to succeed in a practical environment like administering health care or driving.
The theoretical point is interesting, but I am going to argue that error rates are fundamental to intelligence. I would like some help with the nuances.
There may be a distinction to be made between an agent who could do any intellectual task if they carried out the right procedure, and an agent who can figure out for themselves which procedure to perform. While most humans could implement a Turing machine of some kind if they were told how, and wanted to, it’s not obvious they could arrange this from their current state.
That’s a separate topic from error rate, which I still want help with, but also interesting.
Figuring out what procedure to perform is a kind of design task.
Designing includes:
- Defining goals and needs
- Defining a space of alternatives
- Searching a space of alternatives, hopefully with some big shortcuts
- Possibly optimizing
- Testing and iteration
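A deliberately crude sketch of the steps above (every name and number here is my own toy example): define a goal, enumerate alternatives, prune the search, locally optimize the survivors, and iterate.

```python
import random

def design(score, alternatives, shortlist=10, iterations=3):
    """Caricature of the design loop: score the space of alternatives,
    keep a shortlist (the 'big shortcut'), locally optimize the survivors,
    and iterate."""
    candidates = list(alternatives)
    for _ in range(iterations):
        candidates.sort(key=score, reverse=True)
        candidates = candidates[:shortlist]                       # prune
        candidates += [c + random.uniform(-0.5, 0.5) for c in candidates]
    return max(candidates, key=score)

# Toy 'need': a design parameter as close to 3.0 as possible.
random.seed(0)
space = [random.uniform(0, 10) for _ in range(200)]   # space of alternatives
best = design(lambda x: -abs(x - 3.0), space)
```

Real design differs mainly in that the space is vast and implicitly defined, which is exactly where the quality of the shortcuts matters.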
Design is something that people fail at, over and over. They are successful enough of the time to build civilizations.
I feel that design is a fundamental element of quality and collective intelligence. I would love to sort through it in more detail.