I thought my excerpt answered that, but maybe that was illusion of transparency speaking. In particular, this paragraph:
In general, any broad domain involving high stakes, adversarial decision making and a need to act rapidly is likely to become increasingly dominated by autonomous systems. The extent to which the systems will need general intelligence will depend on the domain, but domains such as corporate management, fraud detection and warfare could plausibly make use of all the intelligence they can get. If oneʼs opponents in the domain are also using increasingly autonomous AI/AGI, there will be an arms race where one might have little choice but to give increasing amounts of control to AI/AGI systems.
To rephrase: the main trend in history has been to automate everything that can be automated, both to reduce costs and because machines can do things better than humans do. This isn’t going to stop: I’ve already seen articles calling for both company middle managers and government bureaucrats to be replaced with AIs. If you have any kind of a business, you could potentially make it run better by putting a sufficiently sophisticated AI in charge: it can think faster and smarter, deal with more information at once, and avoid the self-interest that breeds office politics and leads many employees to act suboptimally from the company’s point of view, as you’d inevitably get with a thousand human employees rather than a single AI.
This trend has been going on throughout history, doesn’t show any signs of stopping, and inherently involves giving the AI systems whatever agency they need in order to run the company better.
And if your competitors are having AIs run their company and you don’t, you’re likely to be outcompeted, so you’ll want to make sure your AIs are smarter and more capable of acting autonomously than the competitors. These pressures aren’t just going to vanish at the point when AIs start approaching human capability.
The same considerations also apply to domains other than business, such as governance, but the business and military domains are the most likely to have intense arms race dynamics going on.
Yes, illusion of transparency at work here. That paragraph has always been so clearly wrong to me that I wrote it off as the usual academic prose fluff, and didn’t realize it was in fact the argument being made. Here is the issue I take with that:
You can find instances where industry is clamoring to use AI to reduce costs and improve productivity; Uber and self-driving cars, for example. However, in these cases a combination of two factors is at work: (1) the examples are necessarily specialized narrow AI, not general decision making; and/or (2) the costs of poor decision making are externalized. Let’s look at these points in more detail:
Anytime a human is being used as a meat robot, e.g. an Uber driver, a machine can do the job better and more efficiently with quantifiable tradeoffs due to the machine’s own quirks. However one must not forget that this is the case because the context has already been specialized! One can replace a minimum wage burger flipper with a machine because the job is part of a three-ring binder enterprise that has already been exhaustively thought out to such a degree that every component task can be taught to a minimum wage, distracted teenage worker. If the mechanical burger flipper fails, you go back to paying a $10/hr meat robot to do the trick. But what happens when the corporate strategy robot fails and the new product is a flop? You lose hundreds of millions of invested dollars. And worse, you don’t know until it is all over and played out. Not comparable at all.
Uber might want a fleet of self-driving cars. But that’s because the costs of being wrong are externalized. Get in an accident? It’s your driver’s problem, not Uber’s. Self-driving car gets in an accident? It’s the car owner’s problem, which, surprise, is still not Uber’s. The applications of AGI have risks that are not so easily externalized, however.
I can see how one might think that unchecked AGI would improve the efficiency of corporate management, fraud detection, and warfare. However, that’s confirmation bias. I assure you that the corporate strategists, fraud specialists, and generals get paid the big bucks to think about risk and the ways in which things can go wrong. I can give examples of what could go wrong when an alien AGI psychology tries to interact with irrational humans, but it’s much simpler to remember that even presumably superhuman AGIs have error rates, and these error rates will be higher than humans’ for a good while yet, while the technology is still developing. And what happens when an AGI makes a mistake?
A corporate strategist AGI makes a mistake, and the directors of the corporation who have a fiduciary responsibility to shareholders are held personally accountable. Indemnity insurance refuses to pay out as upper management purposefully took themselves out of the loop, an action that is considered irresponsible in hindsight.
A fraud specialist AGI makes a mistake, and its company turns a blind eye to hundreds of millions of dollars of fraud that a human would have seen. Business goes belly-up.
A war-making AGI makes a mistake, and you are now dead.
I hope that you’ll forgive me, but I must call on anecdotal evidence here. I am the co-founder of a startup that has raised >$75MM. I understand very well how investors, upper management, and corporate strategists manage risk. I also have observed how extremely terrified of additional risk they are. The supposition that they would be willing to put a high-risk proto-AGI in the driver’s seat is naïve to say the least. These are the people that are held accountable and suffer the largest losses when things go wrong, and they are terrified of that outcome.
What is likely to happen, on the other hand, is a hybridization of machine and human. AGI cognitive assistants will permeate these industries, but their job will be to give recommendations, not to steer things directly. And it’s not at all clear to me that this approach, “Oracle AI” as it is called on LW, is so dangerous.
Thank you for the patient explanation! This is an interesting argument that I’ll have to think about some more, but I’ve already adjusted my view of how I expect things to go based on it.
Two questions:
First, isn’t algorithmic trading a counterexample to your argument? It’s true that it’s a narrow domain, but it’s also one where AI systems are trusted with enormous sums of money, and have the potential to make enormous losses. E.g. one company apparently lost $440 million in less than an hour due to a glitch in their software. Wikipedia on the consequences:
Knight Capital took a pre-tax loss of $440 million. This caused Knight Capital’s stock price to collapse, sending shares lower by over 70% from before the announcement. The nature of the Knight Capital’s unusual trading activity was described as a “technology breakdown”.[14][15]
On Sunday, August 5 the company managed to raise around $400 million from half a dozen investors led by Jefferies in an attempt to stay in business after the trading error. Jefferies’ CEO, Richard Handler, and Executive Committee Chair Brian Friedman structured and led the rescue, and Jefferies purchased $125 million of the $400 million investment and became Knight’s largest shareholder.[2] The financing would be in the form of convertible securities, bonds that turn into equity in the company at a fixed price in the future.[16]
The incident was embarrassing for Knight CEO Thomas Joyce, who was an outspoken critic of Nasdaq’s handling of Facebook’s IPO.[17] On the same day the company’s stock plunged 33 percent, to $3.39; by the next day 75 percent of Knight’s equity value had been erased.[18]
Also, you give several examples of AGIs potentially making large mistakes with large consequences, but couldn’t e.g. a human strategist make a similarly big mistake as well?
You suggest that the corporate leadership could be held more responsible for a mistake by an AGI than if a human employee made the mistake, and I agree that this is definitely plausible. But I’m not sure whether it’s inevitable. If the AGI was initially treated the way a junior human employee would be, i.e. initially kept subject to more supervision and given more limited responsibilities, and then had its responsibilities scaled up as people came to trust it more and it learned from its mistakes, would that necessarily be considered irresponsible by the shareholders and insurers? (There’s also the issue of privately held companies with no need to keep external shareholders satisfied.)
one where AI systems are trusted with enormous sums of money
Kinda. They are carefully watched and have separate risk management systems which impose constraints and limits on what they can do.
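A separate risk-management layer of the kind described here might look like the following minimal sketch: the trading model proposes an order, and an independent check vetoes anything outside hard limits. All names and limit values are invented for illustration.

```python
# Hypothetical sketch of a risk layer that sits between a trading model
# and the market, imposing hard constraints on what the model can do.

MAX_ORDER_SIZE = 10_000          # assumed cap on shares per order
MAX_GROSS_EXPOSURE = 1_000_000   # assumed cap on total dollars at risk

def risk_check(order_size, price, current_exposure):
    """Return True only if the proposed order stays inside every limit."""
    if order_size > MAX_ORDER_SIZE:
        return False
    if current_exposure + order_size * price > MAX_GROSS_EXPOSURE:
        return False
    return True

# The model proposes; the risk layer disposes.
print(risk_check(order_size=5_000, price=10.0, current_exposure=0))    # passes
print(risk_check(order_size=50_000, price=10.0, current_exposure=0))   # size cap
print(risk_check(order_size=9_000, price=200.0, current_exposure=0))   # exposure cap
```

The point of keeping this layer separate is that it stays simple and auditable even when the model it constrains is not.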
E.g. one company apparently lost $440 million in less than an hour due to a glitch in their software.
Yes, but that has nothing to do with AI: “To err is human, but to really screw up you need a computer”. Besides, there are equivalent human errors (fat fingers, add a few zeros to a trade inadvertently) with equivalent magnitude of losses.
have separate risk management systems which impose constraints and limits on what they can do.
If those risk management systems are themselves software, that doesn’t really change the overall picture.
Yes, but that has nothing to do with AI:
If we’re talking about “would companies place AI systems in a role where those systems could cost the company lots of money if they malfunctioned”, then examples of AI systems having been placed in roles where they cost the company a lot of money have everything to do with the discussion.
It does because the issue is complexity and opaqueness. A simple gatekeeper filter along the lines of `if trade_size > gazillion: reject` is not an “AI system”.
In which case the AI splits the transaction into 2 transactions, each just below a gazillion.
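The loophole being pointed at here is easy to make concrete: a per-trade cap inspects orders one at a time, so two orders each just under the cap both slip through, and only an aggregate check over the combined flow catches the split. A toy sketch, with all names and limits made up:

```python
LIMIT = 1_000_000  # hypothetical per-trade size cap ("a gazillion")

def per_trade_filter(trade):
    # Naive gatekeeper: looks at each order in isolation.
    return trade <= LIMIT

def cumulative_filter(trades, window_limit=LIMIT):
    # Aggregate check over a window of orders closes the splitting loophole.
    return sum(trades) <= window_limit

big = 1_500_000
halves = [750_000, 750_000]

print(per_trade_filter(big))                      # blocked outright
print(all(per_trade_filter(t) for t in halves))   # each half slips through
print(cumulative_filter(halves))                  # aggregate check still catches it
```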
I’m talking about contemporary-level-of-technology trading systems, not about future malicious AIs.
So? An opaque neural net would quickly learn how to get around trade size restrictions if given the proper motivations.
At which point the humans running this NN will notice that it likes to go around risk control measures and will… persuade it that it’s a bad idea.
It’s not like no one is looking at the trades it’s doing.
How? By instituting more complex control measures? Then you’re back to the problem Kaj mentioned above.
In the usual way. Contemporary trading systems are not black boxes full of elven magic. They are models, that is, a bunch of code and some data. If the model doesn’t do what you want it to do, you stick your hands in there and twiddle the doohickeys until it stops outputting twaddle.
Besides, in most trading systems the sophisticated part (“AI”) is an oracle. Typically it outputs predictions (e.g. of prices of financial assets) and its utility function is some loss function on the difference between the prediction and the actual. It has no concept of trades, or dollars, or position limits.
Translating these predictions into trades is usually quite straightforward.
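The division of labor described above might be sketched like this: the sophisticated part only predicts prices and is scored by a loss on prediction error, while a separate, deliberately dumb execution rule turns predictions into trades. This is a toy illustration with invented names, not any particular firm's system.

```python
def squared_loss(predicted, actual):
    # The predictor's entire utility function: a loss on prediction error.
    # It has no concept of trades, dollars, or position limits.
    return (predicted - actual) ** 2

def prediction_to_trade(predicted, current_price, threshold=0.01):
    """Straightforward translation layer: buy if the predicted move is up
    by more than the threshold, sell if down by more, otherwise hold."""
    move = (predicted - current_price) / current_price
    if move > threshold:
        return "buy"
    if move < -threshold:
        return "sell"
    return "hold"

print(prediction_to_trade(103.0, 100.0))   # predicted +3% move
print(prediction_to_trade(98.0, 100.0))    # predicted -2% move
print(prediction_to_trade(100.5, 100.0))   # inside the threshold
```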