It’s not an intelligence explosion if humans are kept in the loop.
(You could argue that e.g. the Flash Crash resulted because humans weren’t in the loop, but humans can still interrupt the process eventually, so they’re not really out of the loop—they just respond more slowly.)
A new paper suggests that ultrafast machine trading is causing crashes that don’t last long enough for humans to react:
The rises and falls may occur in no more than half a second, unapparent to any human tracking prices. Johnson says if you blink, you miss it. Flash events may happen in milliseconds and have nothing to do with a company’s real value.
...
Following the May 2010 event, U.S. regulators introduced circuit breakers as a safety mechanism, designed to stop trading if a stock price makes a sudden large move. Not all market experts are convinced that is the best solution, given the speed at which today’s machine trading can occur. At that resolution, one of the study authors said, such events are difficult even to observe, let alone regulate.
If we want oversight of this kind of trading, it seems like we’ll have to rely on more ultrafast machines.
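For a sense of what machine-speed oversight might look like, here is a minimal sketch of a circuit breaker watching a millisecond-resolution price feed and halting trading when the price swings more than a threshold fraction within a short window. This is my own illustration, not any regulator’s actual mechanism; the class name and parameters (max_move, window_ms) are assumptions chosen for the example.

```python
from collections import deque

class CircuitBreaker:
    """Toy single-stock circuit breaker: halt trading if the price swings
    more than `max_move` (as a fraction) within any `window_ms` window.
    Purely illustrative; real exchange rules differ in many details."""

    def __init__(self, max_move=0.05, window_ms=500):
        self.max_move = max_move
        self.window_ms = window_ms
        self.ticks = deque()  # (timestamp_ms, price) pairs inside the window

    def on_tick(self, t_ms, price):
        """Process one price tick; return True if trading should halt."""
        self.ticks.append((t_ms, price))
        # Drop ticks that have fallen out of the sliding window.
        while self.ticks and t_ms - self.ticks[0][0] > self.window_ms:
            self.ticks.popleft()
        prices = [p for _, p in self.ticks]
        lo, hi = min(prices), max(prices)
        # Halt if the swing within the window exceeds the threshold.
        return (hi - lo) / lo > self.max_move

# A synthetic sub-second crash: a 10% drop and recovery inside ~300 ms,
# far too fast for a human watching the screen to intervene.
breaker = CircuitBreaker()
feed = [(0, 100.0), (100, 99.0), (200, 90.0), (300, 99.5), (400, 100.0)]
for t, p in feed:
    if breaker.on_tick(t, p):
        print(f"halt at t={t}ms, price={p}")  # fires at t=200ms
        break
```

The point of the sketch is only that the monitor itself must run at tick resolution: by the time any human reads the halt message, the event is already over.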
It’s not an intelligence explosion if humans are kept in the loop.
There is no single “loop”; there are many loops, some of which humans have already been eliminated from through the conventional process of automation.
Humans being in some of the loops does not necessarily even slow things down very much, so long as the other loops are permitted to whir around at full speed.
In my essay on the topic I cite increasingly infrequent periodic code reviews as an example of how human influence on designing the next generation of dominant creatures could fade away gradually, without causing very much slow-down. That sort of thing might result in a “slightly muffled” explosion, but it would still be an explosion.
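To make the “slightly muffled” point concrete, here is a toy model of my own, not taken from the essay: an automated improvement loop compounds every cycle, while a human review gate pauses it only occasionally, and ever less often. All the numbers are arbitrary assumptions.

```python
# Toy model: capability compounds by `gain` each automated cycle.
# A human review pauses progress for `review_cost` cycles, but reviews
# happen at exponentially growing intervals (increasingly infrequent).
def cycles_to_reach(target, gain=1.01, review_cost=10, first_interval=10,
                    interval_growth=2.0, reviews=True):
    capability, cycles = 1.0, 0
    interval, next_review = first_interval, first_interval
    while capability < target:
        cycles += 1
        capability *= gain
        if reviews and cycles >= next_review:
            cycles += review_cost          # humans pause the fast loop...
            interval *= interval_growth    # ...but less and less often
            next_review = cycles + interval
    return cycles

fast = cycles_to_reach(1000.0, reviews=False)
gated = cycles_to_reach(1000.0, reviews=True)
print(fast, gated)  # the human-gated run is only slightly longer
```

Under these assumptions the gated run takes under ten percent longer than the ungated one: the human checkpoint adds delay, but the growth curve keeps its explosive shape.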
It’s not an intelligence explosion if humans are kept in the loop.
Is this so?
It seems to me there’s a continuum between “humans carefully monitoring and controlling a weakish AI system” and “a superintelligent AI-in-a-box cleverly manipulating humans in order to wreak havoc”. As the world transitions from one to the other, at some point it will pass an “intelligence explosion” threshold. But I don’t think it ever passes a “humans are no longer in the loop” threshold.