It is still painful to see the term “Intelligence Explosion” being used to refer only to future developments.
The Intelligence Explosion Is Happening Now. If people think otherwise, they should make a case for that. This really matters: if a process has been going on for thousands of years, then we might already know something about how it operates.
So far, about the only defense of this misleading terminology that I have seen boils down to: “I. J. Good said so, and we should defer to him”. For the actual argument, see the section here titled “Distinguishing the Explosion from the Preceding Build-Up”. I think that is a pretty feeble argument, which in no way justifies the proposed usage.
A nuclear explosion begins when critical mass is reached. You can’t just define the explosion as starting when the bomb casing shatters—and justify that by saying: that is when the human consequences start to matter. By then the actual explosion has been underway for quite some time.
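To make the analogy concrete, here is the textbook first-order picture of a chain reaction (standard reactor-physics notation; the symbols are illustrative, not anything from the terminology debate itself):

```latex
% Chain-reaction growth of the neutron population N(t):
%   k      -- effective neutron multiplication factor
%   Lambda -- mean neutron generation time
N(t) = N_0 \, e^{(k-1)\,t/\Lambda}
% The exponential "explosion" begins the moment k exceeds 1
% (criticality) -- long before the casing shatters.
```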
I think that most of the people who promote the idea of a future-only explosion sincerely believe it: they really think that there will be a future “ignition point”, after which progress will “take off”. This is a case of bad terminology fostering bad thinking. These people are just confused about how cultural evolution and evolutionary synergy work.
The “singularity” terminology is also to blame here. That is another case where bad terminology has resulted in bad thinking.
No doubt the picture of a strange future which can’t be understood in terms of past trends appeals to some. If we believe their claim that the future is filled with strange new woo, totally different from what came before, then maybe we should pay more attention to their guide book. Perhaps the “future-only explosion” nonsense is best understood not as attempted science, but as attempted manipulation. I suspect that this factor is involved in how this bad meme has spread. The weird and different future being pictured is thus best seen as a sociological phenomenon rather than a scientific one.
Anyway, I think those involved should snap out of this one. It is pretty much 100% misleading nonsense. You folk should acknowledge the historical roots of the phenomenon and adopt an appropriate classification and naming scheme for it—and stop promoting the idea of a “future-only explosion”.
It’s not an intelligence explosion if humans are kept in the loop.
(You could argue that e.g. the Flash Crash resulted because humans weren’t in the loop, but humans can still interrupt the process eventually, so they’re not really out of the loop—they just respond more slowly.)
A new paper suggests that ultrafast machine trading is causing crashes that don’t last long enough for humans to react:
The rises and falls might last no longer than half a second, imperceptible to any human who is tracking prices. Johnson says if you blink you miss it. Flash events may happen in milliseconds and have nothing to do with a company’s real value.
...
Following the May 2010 event, U.S. regulators introduced circuit breakers as a safety mechanism, designed to stop trading if a stock price makes a sudden large move. Not all market experts are convinced that this is the best solution, given the speed at which today’s machine trading can occur. At that resolution, one of the study’s authors said, such events are troublesome even to observe, let alone regulate.
If we want oversight of this kind of trading, it seems like we’ll have to rely on more ultrafast machines.
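For illustration, here is a minimal sketch of such a machine-speed monitor. The window, threshold, tick feed, and halt_trading hook are all hypothetical placeholders, not the actual exchange rules:

```python
from collections import deque
import time

# Minimal sketch of an automated circuit breaker: call halt_trading()
# when the price moves more than MAX_MOVE within WINDOW seconds. Real
# post-2010 single-stock circuit breakers differ in detail.
WINDOW = 0.5      # seconds -- sub-second, since humans can't react in time
MAX_MOVE = 0.05   # a 5% move within the window triggers a halt

def make_monitor(halt_trading):
    history = deque()  # (timestamp, price) pairs within the window

    def on_tick(price, now=None):
        now = time.monotonic() if now is None else now
        history.append((now, price))
        # Drop ticks older than the window.
        while history and now - history[0][0] > WINDOW:
            history.popleft()
        prices = [p for _, p in history]
        lo, hi = min(prices), max(prices)
        if lo > 0 and (hi - lo) / lo > MAX_MOVE:
            halt_trading()

    return on_tick
```

The point of the sketch is only that the detection loop itself must run at machine speed; within a half-second window, a human watcher contributes nothing.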
It’s not an intelligence explosion if humans are kept in the loop.
There is no “loop”, there are many loops, some of which humans have already been eliminated from—through the conventional process of automation.
Humans being in some of the loops does not necessarily even slow things down very much—if the other loops are permitted to whir around at full speed.
In my essay on the topic I cite increasingly infrequent periodic code reviews as an example of how human influence on designing the next generation of dominant creatures could fade away gradually, without causing very much slow-down. That sort of thing might result in a “slightly muffled” explosion, but it would still be an explosion.
Is this so? It seems to me there’s a continuum between “humans carefully monitoring and controlling a weakish AI system” and “a superintelligent AI-in-a-box cleverly manipulating humans in order to wreak havoc”. It seems that as the world transitions from one to the other, at some point it will pass an “intelligence explosion” threshold. But I don’t think it ever passes a “humans are no longer in the loop” threshold.
I haven’t read the paper yet, but from all the other material I’ve seen from the SI, an important part of the “intelligence explosion” hypothesis is the establishment of a stable goal/utility function. This is qualitatively unlike what we have now, which is a system of agents competing and trading with each other.
An “intelligence explosion” refers to explosive increases in intelligence. What you are talking about sounds as though it has more to do with the social structure in which agents are embedded.
Do you deny that intelligence has increased recently? What about computers and calculators? What about collective intelligence? What about education? What about the evolution of human beings from chimp-like creatures?
Intelligence on the planet is already exploding (in the sense of undergoing exponential growth), and has been for a long time; check the historical record.
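One line of algebra makes the point that a pure exponential has no privileged “ignition point” (the growth rate r here is just an illustrative symbol):

```latex
% Exponential growth I(t) = I_0 e^{r t} has a constant doubling time:
t_d = \frac{\ln 2}{r}
% Every point on the curve looks locally the same, so any "take-off"
% threshold has to be imposed from outside the growth law itself.
```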
There isn’t really an “intelligence explosion hypothesis”. The intelligence explosion is a well-established, ongoing event. One might make hypotheses about the future shape of the intelligence explosion, and there are many of those.
I’ve read the paper, and while it mentions “intelligence explosion” a few times, they seem to be keeping that terminology taboo when it comes to the meat of the argument, which is what I think you were asking for.
Most of the material is phrased in terms of whether AIs will exhibit significantly more intelligence than human-based systems and whether human values will be preserved.
I think most people use “intelligence explosion” to mean something more specific than just exponential growth. But you’re right that we should try and learn what we can about how systems evolve from looking at the past.
I’ve read the paper, and while it mentions “intelligence explosion” a few times, they seem to be keeping that terminology taboo when it comes to the meat of the argument, which is what I think you were asking for.
Yes, this is only a cosmetic issue with the paper, really.
I think most people use “intelligence explosion” to mean something more specific than just exponential growth.
Sure: explosions do also have to wind up going rapidly to qualify as such.