I think it’s a pity that we’re not focusing on what we could do to test the tool vs general AI distinction. For example, here’s one near-future test: how do we humans deal with drones?
Drones are exploding in popularity, are constantly gaining new capabilities, and are coveted by countless security agencies and private groups for their usefulness in all sorts of roles, both benign and disturbing - just as AIs would be. The tool vs general AI distinction maps very nicely onto drones as well: a tool AI corresponds to a drone being manually flown by a human pilot somewhere, while a general AI would correspond to an autonomous drone carrying out some mission on its own (blast insurgents?).
So, here is a near-future test of the question ‘are people likely to let tool AIs “drive themselves” for greater efficiency?’ - simply ask whether in, say, a decade there are autonomous drones performing tasks that now would only be carried out by piloted drones.
If in a decade we learn that autonomous drones are killing people, then we have an answer to our tool AI question: it doesn’t matter because given a tool AI, people will just turn it into a general AI.
(Amdahl’s law: if the human in the loop takes up 10% of the time, and the AI or drone part comprises the other 90%, then even if the drone or AI becomes infinitely fast, you can still never cut the total loop time by more than 90% - a speedup of at most 10x… until you hand over that remaining 10% to the AI, that is. EDIT: See also https://web.archive.org/web/20121122150219/http://lesswrong.com/lw/f53/now_i_appreciate_agency/7q4o )
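A minimal sketch of that arithmetic (the 10%/90% split and the speedup factors below are just the illustrative numbers from the parenthetical above, nothing measured):

```python
# Amdahl's law: if only a fraction p of a loop can be accelerated, and that
# part runs s times faster, the whole loop speeds up by 1 / ((1 - p) + p / s).
def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

p = 0.90  # the drone/AI's share of the loop; the human keeps the other 10%
for s in (2.0, 10.0, 100.0, float("inf")):
    print(f"AI part {s:g}x faster -> whole loop only {overall_speedup(p, s):.2f}x faster")
# The whole-loop speedup never exceeds 1 / (1 - p) = 10x: the human's 10%
# caps the gain until that last 10% is handed over to the AI as well.
```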
Besides Knight Capital, HFT may provide another example of a near-disaster caused by economic incentives forcing the removal of safeguards from a narrow AI. From the LRB’s “Be grateful for drizzle: Donald MacKenzie on high-frequency trading”:
Such events don’t always become public. In a New York coffeehouse, a former high-frequency trader told me matter-of-factly that one of his colleagues had once made the simplest of slip-ups in a program: what mathematicians call a ‘sign error’, interchanging a plus and a minus. When the program started to run it behaved rather like the Knight program, building bigger and bigger trading positions, in this case at an exponential rate: doubling them, then redoubling them, and so on. ‘It took him 52 seconds to realise what was happening, something was terribly wrong, and he pressed the red button,’ stopping the program. ‘By then we had lost $3 million.’ The trader’s manager calculated ‘that in another twenty seconds at the rate of the geometric progression,’ the trading firm would have been bankrupt, ‘and in another fifty or so seconds, our clearing broker’ – a major Wall Street investment bank – ‘would have been bankrupt, because of course if we’re bankrupt our clearing broker is responsible for our debts … it wouldn’t have been too many seconds after that the whole market would have gone.’

What is most telling about that story is that not long previously it couldn’t have happened. High-frequency firms are sharply aware of the risks of bugs in programs, and at one time my informant’s firm used an automated check that would have stopped the errant program well before its human user spotted that anything was wrong. However, the firm had been losing out in the speed race, so had launched what my informant called ‘a war on latency’, trying to remove all detectable sources of delay. Unfortunately, the risk check had been one of those sources.

(Memoirs from US drone operators suggest that the bureaucratic organizations in charge of racking up kill-counts have become disturbingly cavalier about not doing their homework on the targets they’re blowing up, but thus far, anyway, they haven’t made the drones fully autonomous.)