A realistic example where I’d expect the delay of human oversight to generate a strong incentive for using an agent AGI?
I’d guess high-speed stock trading. We already have AIs trading stocks to maximize profits over significant time horizons, far faster than humans can effectively supervise.
We might already have examples of these AIs being misaligned and causing harm. (Maybe.) The 2010 Flash Crash is poorly understood, and few blame it entirely on high-frequency trading (HFT) algorithms. But regulators say that HFTs operating without human supervision were “clearly a contributing factor” to the crash because:
HFTs sold big and fast as soon as the market began dipping, faster than humans likely could have, and
(Probably not a major factor) HFTs may have clogged and confused markets with quote stuffing: “placing and then almost immediately cancelling large numbers of rapid-fire orders to buy or sell stocks”. (The sketch below gives a rough sense of the speed mismatch involved.)
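To make that supervision gap concrete, here’s a minimal back-of-the-envelope sketch in Python. Both latency figures are rough order-of-magnitude assumptions I’m supplying for illustration, not measurements from the 2010 crash; the point is just that an algorithm can complete hundreds of order actions before a human supervisor can react at all.

```python
# Toy back-of-the-envelope sketch (not real trading code). Both constants
# are rough order-of-magnitude assumptions chosen for illustration, not
# figures measured from the 2010 Flash Crash.

HUMAN_REACTION_S = 0.25   # assumed: ~250 ms for a fast human response to a screen event
ORDER_CYCLE_S = 0.0005    # assumed: ~0.5 ms for one place-then-cancel order round trip

def actions_per_human_reaction(reaction_s: float, cycle_s: float) -> int:
    """How many place/cancel cycles an algorithm completes before a human
    supervisor can even register that something is going wrong."""
    return round(reaction_s / cycle_s)

if __name__ == "__main__":
    n = actions_per_human_reaction(HUMAN_REACTION_S, ORDER_CYCLE_S)
    print(f"~{n} order actions per human reaction time")  # ~500
```

Real HFT round trips can be much faster than the half-millisecond assumed here, which only widens the gap.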
To be fair, others say that HFTs were a big part of why the crash was quickly reversed and the market returned to normal.
In any case, all of this played out without any human supervision, and the dynamics were so opaque that we still don’t fully understand them. That seems like a precedent for deploying opaque, unsupervised AIs with broad goals.
Yeah, I worry that competitive pressure could push people to deploy unsafe systems. Military AI seems like an especially risky case: military goals are harder to specify than “maximize portfolio value”, but there are probably reasonable proxies, and as AI gets more capable and more widely used, there’s a strong incentive to stay ahead of the competition.