Thank you for the patient explanation! This is an interesting argument that I’ll have to think about some more, but I’ve already adjusted my view of how I expect things to go based on it.
Two questions:
First, isn’t algorithmic trading a counterexample to your argument? It’s true that it’s a narrow domain, but it’s also one where AI systems are trusted with enormous sums of money, and have the potential to make enormous losses. E.g. one company apparently lost $440 million in less than an hour due to a glitch in their software. Wikipedia on the consequences:
Knight Capital took a pre-tax loss of $440 million. This caused Knight Capital’s stock price to collapse, sending shares lower by over 70% from before the announcement. The nature of Knight Capital’s unusual trading activity was described as a “technology breakdown”.[14][15]
On Sunday, August 5 the company managed to raise around $400 million from half a dozen investors led by Jefferies in an attempt to stay in business after the trading error. Jefferies’ CEO, Richard Handler, and Executive Committee Chair Brian Friedman structured and led the rescue, and Jefferies purchased $125 million of the $400 million investment and became Knight’s largest shareholder.[2] The financing would be in the form of convertible securities, bonds that turn into equity in the company at a fixed price in the future.[16]
The incident was embarrassing for Knight CEO Thomas Joyce, who was an outspoken critic of Nasdaq’s handling of Facebook’s IPO.[17] On the same day the company’s stock plunged 33 percent, to $3.39; by the next day 75 percent of Knight’s equity value had been erased.[18]
Also, you give several examples of AGIs potentially making large mistakes with large consequences, but couldn’t e.g. a human strategist make a similarly big mistake as well?
You suggest that the corporate leadership could be held more responsible for a mistake by an AGI than for the same mistake made by a human employee, and I agree that this is definitely plausible. But I’m not sure whether it’s inevitable. If the AGI were initially treated the way a junior human employee would be, i.e. initially kept subject to more supervision and given more limited responsibilities, and then had its responsibilities scaled up as people came to trust it more and it learned from its mistakes, would that necessarily be considered irresponsible by the shareholders and insurers? (There’s also the issue of privately held companies with no need to keep external shareholders satisfied.)
one where AI systems are trusted with enormous sums of money
Kinda. They are carefully watched and have separate risk management systems which impose constraints and limits on what they can do.
E.g. one company apparently lost $440 million in less than an hour due to a glitch in their software.
Yes, but that has nothing to do with AI: “To err is human, but to really screw up you need a computer”. Besides, there are equivalent human errors (fat-finger mistakes, inadvertently adding a few zeros to a trade) with equivalent magnitudes of losses.
have separate risk management systems which impose constraints and limits on what they can do.
If those risk management systems are themselves software, that doesn’t really change the overall picture.
Yes, but that has nothing to do with AI:
If we’re talking about “would companies place AI systems in a role where those systems could cost the company lots of money if they malfunctioned”, then examples of AI systems having been placed in roles where they cost the company a lot of money have everything to do with the discussion.
If those risk management systems are themselves software, that doesn’t really change the overall picture.
It does, because the issue is complexity and opaqueness. A simple gatekeeper filter along the lines of “reject any single trade larger than a gazillion” is not an “AI system”.
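As a purely illustrative sketch of the kind of gatekeeper filter being described (the limit value and all names here are made up, not from any real system), such a filter is just a threshold check:

```python
# Hypothetical sketch of a simple pre-trade gatekeeper filter.
# The limit and names are illustrative, not taken from any real system.

MAX_ORDER_NOTIONAL = 1_000_000_000  # the "gazillion": per-order size limit


def gatekeeper(order_notional: float) -> bool:
    """Accept an order only if it stays within the per-order size limit."""
    return order_notional <= MAX_ORDER_NOTIONAL


# A single oversized order gets blocked...
print(gatekeeper(2_000_000_000))  # False
# ...but each of two orders sized just below the limit passes the check.
print(gatekeeper(900_000_000))    # True
```

Nothing about this is opaque: the rule is a one-line comparison that a human can read and audit directly.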
In which case the AI splits the transaction into 2 transactions, each just below a gazillion.
I’m talking about contemporary-level-of-technology trading systems, not about future malicious AIs.
So? An opaque neural net would quickly learn how to get around trade size restrictions if given the proper motivations.
At which point the humans running this NN will notice that it likes to go around risk control measures and will… persuade it that it’s a bad idea.
It’s not like no one is looking at the trades it’s doing.
How? By instituting more complex control measures? Then you’re back to the problem Kaj mentioned above.
In the usual way. Contemporary trading systems are not black boxes full of elven magic. They are models, that is, a bunch of code and some data. If the model doesn’t do what you want it to do, you stick your hands in there and twiddle the doohickeys until it stops outputting twaddle.
Besides, in most trading systems the sophisticated part (“AI”) is an oracle. Typically it outputs predictions (e.g. of prices of financial assets) and its utility function is some loss function on the difference between the prediction and the actual. It has no concept of trades, or dollars, or position limits.
Translating these predictions into trades is usually quite straightforward.
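As a hypothetical sketch of that translation step (no real system’s API; the thresholds, limits, and function names are invented for illustration), the oracle only emits a price forecast, and a separate, simple layer turns it into a position:

```python
# Illustrative sketch only: wiring an "oracle" price predictor to a
# simple trading rule. All names, thresholds, and limits are invented.


def predicted_return(predicted_price: float, current_price: float) -> float:
    """The model only outputs a price forecast; the rest is plumbing."""
    return (predicted_price - current_price) / current_price


def target_position(pred_ret: float, max_position: int = 100) -> int:
    """Translate the prediction into a position, clipped by a hard limit."""
    if pred_ret > 0.001:    # predicted rise beyond a small threshold: go long
        return max_position
    if pred_ret < -0.001:   # predicted fall: go short
        return -max_position
    return 0                # otherwise stay flat


# The oracle forecasts a rise from 100.0 to 101.0:
pos = target_position(predicted_return(101.0, 100.0))
print(pos)  # 100: buy up to the position limit
```

The point of the sketch is that the prediction-to-trade layer, like the gatekeeper above it, is transparent code with hard limits; only the forecast itself comes from the sophisticated model.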