I think there’s a trilemma with updating CAIS-like systems to the foundational model world, which is: who is doing the business development?
I came up with three broad answers (noting reality will possibly be a mixture):
1. The AGI. This is like Sam Altman’s old answer, or a sovereign AI; you just ask it to look at the world, figure out what tasks it should do, and then do them.
2. The company that made the AI. I think this was DeepMind’s plan: come up with good AI algorithms, find situations where those algorithms can be profitably deployed, and then deploy them, maybe with a partner.
3. The broader economy. This looks to me like OpenAI’s current plan: release an API and encourage people to build companies on top of it.
[In pre-foundational-model CAIS, the answer was obviously 3--every business procures its own AI tools to accomplish particular functions, and there’s no ‘central planning’ for computer science.]
I don’t think 1 is CAIS, or if it is, then I don’t see the daylight between CAIS and good ol’ sovereign AI. You gradually morph from the economy as it is now to central planning via AGI, and I don’t think you even have much guarantee that it’s human overseen or follows the relevant laws.
I think 2 has trouble being comprehensive. There are ten thousand use cases for AI; the AI company has to be massive to have a product for all of them (or be using the AI to do most of the work, in which case we’re degenerating into case 1), and then it suffers from internal control problems. (This degenerates into case 3, where individual product teams are like firms and the company that made the AI is like the government.)
I think 3 has trouble being non-agentic and peaceful. Even with GPT-4, people are trying to set it up to act autonomously. I think the Drexlerian response here is something like:
Yes, but why expect them to succeed? When someone tells GPT-4 to make money for them, it’ll attempt to deploy some standard strategy, which will fail because a million other people are trying the exact same thing, or will only get them an economic rate of return (“put your money in index funds!”). Only in situations where the human operators have a private edge on the rest of the economy (like having a well-built system targeting an existing vertical that the AI can slot into, pre-existing tooling that can orient to the frontier of human knowledge, etc.) will you get an AI system with a private edge against the rest of the economy, and it’ll be overseen by humans.
My worry here mostly has to do with the balance between offense and defense. If foundational-model-enabled banking systems are able to detect fraud as easily as foundational-model-enabled criminals are able to create fraud, then we get a balance like today’s and things are ‘normal’. But it’s not obvious to me that this will be the case (especially in sectors where crime is better resourced than police are, or sectors where systems are difficult to harden).
That said, I do think I’m more optimistic about the foundational model version of CAIS (where there can be some centralized checks on what the AI is doing for users) than the widespread AI development version.
The Drexlerian response you generated is a restatement of the EMH: when everyone tries to act in the way that is maximally beneficial to themselves, you end up with a stable equilibrium.
You’re exactly right: if everyone separately has GPT-4 trying to make money for them, each copy will exhaust the available gradients and ultimately find no alpha, which is why the minimum-fee index fund becomes the best you can do.
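A toy way to see the crowding point, with entirely made-up numbers (the alpha, cost, and fee figures below are assumptions for illustration, not market data): a fixed pool of mispricing gets split across N identical copies of the same strategy, and past a fairly small N the minimum-fee index fund wins.

```python
# Illustrative sketch only: a fixed pool of exploitable mispricing ("alpha")
# split evenly among N identical copies of the same strategy.  All numbers
# are assumptions chosen for illustration, not market data.
total_alpha = 0.05    # 5% excess return if you were the only one trading it
active_cost = 0.002   # trading costs for the active strategy
index_return = 0.07   # broad market return
index_fee = 0.0003    # minimum-fee index fund

for n_copies in [1, 10, 100, 1000]:
    active = index_return + total_alpha / n_copies - active_cost
    passive = index_return - index_fee
    print(f"{n_copies:>4} copies: active {active:.4f} vs index {passive:.4f}")
# One copy beats the index; by ~100 identical copies the index fund wins.
```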
Problems occur when you have collusion or only a few players. The flaw here is that everyone is using the same model, or derivatives of the same model: you could manipulate the market by generating fake news that scams every GPT-n into acting in the same predictable way.
This is ironically an exploitable flaw in any rational algorithm.
If you think about it, you would finetune the GPT-n over many RL examples to anneal to the weights that encode the optimal trading policy. There is one and only one optimal trading policy that maximizes profit, so over an infinite timespan with infinite compute, all models trained on the same RL examples and architecture will converge on the same policy.
This policy will thus be predictable and exploitable, unless it is able to encode deliberate randomness into its decision making, which is something you can do, though current network architectures don’t precisely have that baked in.
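Here is a minimal sketch of both claims, under a deliberately toy setup (a two-action “market”, greedy tabular agents, and an adversary that simply predicts repetition; none of this is meant to model real trading systems): two agents trained on the same data converge to the same deterministic action, an adversary who knows this wins essentially every round, and a mixed (randomized) strategy caps the adversary’s edge at chance.

```python
import numpy as np

ACTIONS = ["buy", "sell"]

def train_greedy_policy(dataset):
    """Average reward per action over a fixed dataset, then always play the argmax.
    Stands in for 'anneal to the weights that encode the optimal policy'."""
    means = {a: np.mean([r for act, r in dataset if act == a]) for a in ACTIONS}
    best = max(means, key=means.get)   # deterministic argmax action
    return lambda: best

def exploit(policy, n_rounds=1000):
    """Adversary wins a round when it predicts the policy's action; its
    'prediction' is simply whatever the policy played last round."""
    wins, last = 0, policy()
    for _ in range(n_rounds):
        prediction = last
        action = policy()
        wins += (prediction == action)
        last = action
    return wins / n_rounds

# Identical "RL examples" for both agents: buying looked slightly better.
data = [("buy", 1.0), ("buy", 0.8), ("sell", 0.5), ("sell", 0.7)]
agent_a, agent_b = train_greedy_policy(data), train_greedy_policy(data)
print(agent_a(), agent_b())                   # same action: the policies converge

print("vs deterministic:", exploit(agent_a))  # ~1.0: fully predictable

rng = np.random.default_rng(0)
mixed = lambda: rng.choice(ACTIONS)           # deliberate randomness (mixed strategy)
print("vs mixed:", exploit(mixed))            # ~0.5: the adversary's edge vanishes
```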
In any case, take the science fiction story of one superintelligent model tasked with making its owner “infinitely rich”, which proceeds to:
1. exploit flaws in the market to collect alpha, then recursively make larger and larger trades,
2. reach a balance where it can directly manipulate the market,
3. reach a balance where it can buy entire companies,
4. reach a balance where it can fund robotics and other capabilities research with the purchased companies,
5. sell new ASI-made products to get the funds to buy every other company on earth,
6. launch hunter-killer drones to kill everyone on earth but the owner, so that it can set the owner’s account balance to infinity without anyone ever correcting it.
This can’t work if there’s no free energy to exploit. It wouldn’t get past the first step, because 5000 other competing systems rob it of alpha and it goes back to index funds. At higher levels the same thing happens: other competing systems report it to humans, or sell their own military services to stop it, or file antitrust lawsuits that cause it to lose ownership of too much of the economy.