I’m looking over the table of contents of Intelligence Explosion Microeconomics, and it doesn’t look as though there’s any reference to what seems to me to be the most relevant consideration for an intelligence explosion: returns on AI research. As I previously pointed out, an AGI that was just as “smart” as all the world’s AI researchers combined would make AI progress at the same slow rate they are making AI progress, with no explosion. Having that AI make itself 10% “smarter” (which would take a long time—it’s only as smart as the world’s AI researchers) would only result in self-improvement progress that was 10% faster. In other words, it’d be exponential, yes, but it’d be an exponential like human economic growth, not like a nuclear chain reaction.
The empirical finding that the combined brainpower of the world’s AI researchers (who are very smart people, according to a reliable source of mine) gets such low returns in terms of finding new useful AI insights seems to me like it should weigh more than reasoning by analogy from non-AI domains.
(But even given this empirical finding, the question seems hopelessly uncertain to me, and I’m curious what justification anyone would give for updating strongly away from even odds. The most salient observation I made from my recent PredictionBook experiment is that if a question is interesting enough for me to put it in PredictionBook, then I know less than I think about it and I’m best off giving it 50/50 odds. I suspect this applies to other humans, too; e.g., Jonah Sinick expressed a similar sentiment to me the other day. So a priori, the very fact that two smart people, Robin and Eliezer, take opposite sides of an issue should make us reluctant to assign any strong probabilities… I think :P)
So a priori, the very fact that two smart people, Robin and Eliezer, take opposite sides of an issue should make us reluctant to assign any strong probabilities… I think :P
Suppose experts’ opinions were assigned by coin flip with a weighted coin, where the weight of the coin is the probability that makes best use of available information.
If we go to the first expert and they hold opinion Heads, what do we think the weighting of the coin is? 2/3. But then another expert comes along with opinion Tails, and so our probability goes back to 1/2. Last, we meet another expert with opinion Heads. But jaded as we are, we only update our probability to 3/5, or 0.6 rather than 0.667.
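A minimal sketch of that update rule, assuming a uniform prior over the coin’s weight (this is just Laplace’s rule of succession; the code and function name here are mine, not from the comment):

```python
from fractions import Fraction

def prob_next_heads(opinions):
    """Posterior mean of the coin's Heads-weight after observing `opinions`
    (equivalently, P(the next expert holds Heads)), under a uniform prior:
    Laplace's rule of succession, (heads + 1) / (n + 2)."""
    heads = opinions.count("H")
    n = len(opinions)
    return Fraction(heads + 1, n + 2)

print(prob_next_heads(["H"]))             # 2/3
print(prob_next_heads(["H", "T"]))        # 1/2
print(prob_next_heads(["H", "T", "H"]))   # 3/5, i.e. 0.6
```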
So, sure. :P Although this sort of model makes less sense once you start evaluating the rhyme and reason behind the experts’ opinions rather than just taking them as opaque data points.
I don’t really trust Robin and Eliezer to be well-calibrated about what they don’t know. One way to become a public figure is to make interesting predictions, and both have used this strategy. So polling public-figure-ish smart people as opposed to smart people in general will tend to get us a more confidently expressed and interesting-for-the-sake-of-interesting set of opinions. Also, neither has a PredictionBook account that’s actively used (as far as I know; I’ve recently been using a pseudonym and maybe one of them is as well).
For some perspective, my younger brother Tim is very smart and in his years of peak intelligence, but he does not have the high status associated with writing a widely read blog or being a professor, and his view on singularity-related stuff, as far as I can tell, is that the future is too hard to predict for it to be worth bothering with. You could say that Robin and Eliezer are authorities on singularity-related topics because they write widely read blogs about them, but they write widely read blogs about them because they have positive predictions to make. So there’s a selection effect. If a smart person thinks the future is very uncertain, they aren’t going to put in the time & effort necessary to seem like a legitimate authority on the topic. (If you want an authority on another topic who seems to agree with my brother, here’s Daniel Kahneman.)
This poll of Jane Street Capital geniuses seems like an even stronger argument that we shouldn’t have a strong opinion in either direction.
When any speedup of 10% takes a constant amount n of computations, you get, for the computational speed f, the approximating differential equation f′ [increase in speed over time] = 0.1f [10% increase] / (n/f) [time needed for that increase]. This diverges in finite time. Where are you getting exponential growth from?
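For reference, here is a worked version of that equation (my own derivation from the setup above, so treat it as a sketch rather than part of the original comment):

```latex
\frac{df}{dt} \;=\; \frac{0.1\,f}{n/f} \;=\; \frac{0.1\,f^{2}}{n}
\qquad\Longrightarrow\qquad
f(t) \;=\; \frac{f_{0}}{1 - 0.1\,f_{0}\,t/n}
```

which blows up at the finite time t = 10n/f₀. That is hyperbolic growth rather than exponential growth, which is what motivates the question.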
When any speedup of 10% takes a constant amount n of computations
I didn’t make this assumption—my model assumes that increasing the brainpower of an already-very-smart intelligence by 10% would be harder for a human AI researcher than increasing the brainpower of a pretty-dumb intelligence by 10%. It is an interesting assumption to consider, however.
Anyway, exponential growth is for quantities that grow at a rate directly proportional to the quantity. So if you can improve your intelligence at a rate that’s a constant multiple of how smart you are, then we’d expect to see your intelligence grow exponentially. Given data from humans trying to build AIs, we should expect this constant multiple to be pretty low. If you want a somewhat more detailed justification, you can take a stab at reading my original essay on this topic (warning: it has some bad/incorrect ideas; read the comments).
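In symbols (my own gloss on the paragraph above; the symbols I and k aren’t defined in the original): if intelligence improves at a rate proportional to itself,

```latex
\frac{dI}{dt} \;=\; k\,I
\qquad\Longrightarrow\qquad
I(t) \;=\; I_{0}\,e^{k t}
```

which is exponential growth. The empirical point is that k, judged from how slowly the world’s AI researchers generate useful AI insights, looks small, so the exponential would be a slow one, like economic growth rather than a chain reaction.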
An AI as smart as all the world’s AI scientists would make progress faster than them (not saying how much faster or if it would foom) because:
A. It would be perfectly coordinated. If every AI researcher knew what every other one was thinking about AI, ideas would be tested and examined faster. Communication would not be a problem, and whiteboards would be unnecessary.
B. I’m not sure if you meant smartness as intelligence or as optimization power, but an AI that had the combined intelligence of all the AI researchers would have MORE optimization power, because this intelligence would not be held back by emotions, sleep, or human biases (except those accidentally built into the AI).
C. Faster iteration: the world’s AI scientists can’t actually test and run a change in the code of an AI, because there’s no such code and they don’t have the supercomputer. Once you have an AI running on a computer it can implement good ideas or test ideas for goodness far faster and more empirically.
D. It can do actual empiricism on an AI mind, as opposed to what AI researchers can do now.
See also what Randaly said. For reasons similar to those behind Robin Hanson’s Emu Hell, a digital, computer-bound mind will just be better at a lot of tasks than one running in meat.
A. It would be perfectly coordinated. If every AI researcher knew what every other one was thinking about AI, ideas would be tested and examined faster. Communication would not be a problem, and whiteboards would be unnecessary.
I don’t think that has to be true. For some AI designs it might be; for others it might be false.
B. I’m not sure if you meant smartness as intelligence or as optimization power, but an AI that had the combined intelligence of all the AI researchers would have MORE optimization power, because this intelligence would not be held back by emotions, sleep, or human biases (except those accidentally built into the AI).
I think you underrate the usefulness of human heuristics and human emotions. Human biases happen because our heuristics have some weaknesses. That doesn’t mean, however, that our heuristics aren’t pretty good.
To clarify: it’s as smart as them in the sense that, when you take into account factors A-D (and similar factors), its intellectual output on any problem (including AGI research) would be similar.
It sounds like with factor C you are saying that you expect AI insights to come faster once a working, if slow, AGI implementation is available. I don’t think this is obvious. “Once you have an AI running on a computer it can implement good ideas or test ideas for goodness far faster and more empirically.” We already have computers available for testing out AI insights.
Not really? You can test out pattern-matching algorithms and the like, but you can’t test out anything more generalized, or anything as part of something more generalized, like problem-solving algorithms, because those would require you to have an entity within which to run them.
Hm, plausible. To restate: if a mind consists of lots of little components that are useless on their own and only work well in concert, then it’ll be hard to optimize any given component without the mind that surrounds it, because it’s hard to gather data on which component designs work better. Does that seem accurate?
Yes, though I think face- and voice-recognition software shows us that many of the algorithms can be useful on their own. But, e.g., an algorithm for prioritizing face vs. voice recognition when talking to humans is not.
I also think all the AI developers in the world will have a higher pace of development after functioning AIs exist than beforehand, which might be relevant if fooming for whatever reason does not take place. Sort of like Steam Engine time: all the little valves and pressure regulators someone could invent and try out will suddenly become a thing.
As I previously pointed out, an AGI that was just as “smart” as all the world’s AI researchers combined would make AI progress at the same slow rate they are making AI progress, with no explosion.
Human neurons run at ~100 hertz; computers run at a few gigahertz. An AGI would be millions of times faster by default.
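A quick arithmetic check on that ratio (taking the comment’s figures at face value; the assumption that one clock cycle corresponds to one serial “step” is mine, and it is exactly what the replies below dispute):

```latex
\frac{\sim 3\times10^{9}\ \text{Hz}}{\sim 10^{2}\ \text{Hz}} \;\approx\; 3\times10^{7}
```

i.e. on the order of tens of millions of clock cycles per neuron firing.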
Having that AI make itself 10% “smarter” (which would take a long time—it’s only as smart as the world’s AI researchers) would only result in self-improvement progress that was 10% faster.
Human neurons run at ~100 hertz; computers run at a few gigahertz. An AGI would be millions of times faster by default.
I don’t think that’s a reasonable comparison. Neurons do a bunch of calculations. It’s easy to imagine an AI that would need a second to get from one mental state to the next.
Human neurons run at ~100 hertz; computers run at a few gigahertz. An AGI would be millions of times faster by default.
By that logic, we should already have an AI that’s millions of times faster than a human, merely by virtue of being implemented on silicon hardware.
There are two inputs to intelligence: software and hardware. Combined, they produce a system that reasons. When the artificial system that reasons is brought to a point where it is as “smart” as the world’s AI researchers, it will produce AI insights at the same speed they do. I don’t see how the combination of software/hardware inputs that produced that result is especially important.
This seems wrong to me; can you justify this?
Which part? The part where the AI makes progress at the rate I said it would in the previous paragraph? In that case, your issue is with the previous paragraph, as I’m just restating what I already said. The only new thing is that an AI that’s 10% smarter would find insights 10% faster… I agree that I’m being kinda handwavey here, but hopefully the waving is vaguely plausible.