The Drexlerian response you generated is a restatement of the EMH (efficient-market hypothesis): when everyone acts in the way that is maximally beneficial for themselves, you end up with a stable equilibrium.
You’re exactly right: if everyone separately has a GPT-4 trying to make money for them, each copy will exhaust the available gradients and ultimately find no alpha, which is why the minimum-fee index fund becomes the best you can do.
Problems occur when you have collusion or only a few players. The flaw here is everyone using the same model, or derivatives of the same model: you could manipulate the market by generating fake news that scams every GPT-n into acting in the same predictable way.
This is, ironically, an exploitable flaw in any rational algorithm.
If you think about it, you would fine-tune the GPT-n over many RL examples to anneal toward the weights that encode the optimal trading policy. There is one and only one optimal trading policy that maximizes profit, so over an infinite timespan with infinite compute, all models trained on the same RL examples with the same architecture will converge on the same policy.
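As a toy sketch of that convergence claim (everything here is my assumption for illustration: the actions, the rewards, the value-averaging rule), two independently owned copies of the same "architecture" trained on the same experience end up with identical greedy policies, so each one is perfectly predictable from the other:

```python
# Toy bandit with hypothetical trading actions and rewards.
def train(data):
    # Average reward per action; the greedy policy is the argmax action.
    values = {}
    for action, reward in data:
        v, n = values.get(action, (0.0, 0))
        values[action] = ((v * n + reward) / (n + 1), n + 1)
    return max(values, key=lambda a: values[a][0])

# Both owners' models see the same RL experience.
experience = [("buy", 1.0), ("sell", -0.5), ("buy", 0.8), ("hold", 0.1)]
policy_a = train(list(experience))  # owner A's model
policy_b = train(list(experience))  # owner B's model
print(policy_a == policy_b)         # identical training -> identical policy
```

With real networks the convergence is only approximate, but the direction of the argument is the same: shared data and shared architecture squeeze out the differences.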
This policy will thus be predictable and exploitable, unless the model can encode deliberate randomness into its decision making, which is something you can do in principle, though current network architectures don’t precisely have that baked in.
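To make the exploitability point concrete, here is a minimal sketch using matching pennies as a stand-in for any adversarial market interaction (the game and its payoffs are my assumption, not something from the discussion above). A deterministic "optimal" policy loses every round to an opponent who has learned to predict it; a randomized (mixed) policy gives nothing away:

```python
import random

def deterministic_policy(round_num):
    # Always plays the same move in the same situation -> fully predictable.
    return "heads"

def mixed_policy(round_num):
    # Deliberate randomness: 50/50 is the game-theoretic optimum here.
    return random.choice(["heads", "tails"])

def exploit_rate(policy, rounds=10_000):
    """Adversary wins a round whenever it matches the policy's move."""
    wins = 0
    for r in range(rounds):
        prediction = "heads"  # adversary has learned the deterministic policy
        if policy(r) == prediction:
            wins += 1
    return wins / rounds

print(exploit_rate(deterministic_policy))  # 1.0: exploited every round
print(exploit_rate(mixed_policy))          # roughly 0.5: no better than chance
```

The randomness has to be genuine and irreducible; a pseudo-random pattern that other GPT-n copies can also learn puts you right back in the predictable case.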
In any case, consider the science fiction story of one superintelligent model tasked with making its owner “infinitely rich”. It proceeds to:
- exploit flaws in the market to collect alpha, recursively making larger and larger trades,
- reach a balance where it can directly manipulate the market,
- reach a balance where it can buy entire companies,
- reach a balance where it can fund robotics and other capabilities research through the purchased companies,
- sell new ASI-made products to get the funds to buy every other company on earth,
- launch hunter-killer drones to kill everyone on earth but the owner, so that it can set the owner’s account balance to infinity with no one left to ever correct it.
This can’t work if there’s no free energy to capture. It wouldn’t get past the first step, because 5,000 other competing systems rob it of alpha and it falls back to index funds. At every higher level the same thing happens: other competing systems report it to humans, sell their own military services to stop it, or file antitrust lawsuits that cause it to lose ownership of too much of the economy.
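A rough sketch of that "no free energy" claim (the mispricing, price-impact coefficient, and all numbers here are made up for illustration; this is not a market model): the more independent systems chase the same mispricing, the faster it closes, and per-trader alpha collapses toward zero:

```python
def alpha_per_trader(n_traders, mispricing=10.0, impact=0.5, steps=50):
    """Total profit each trader extracts before the price gap closes."""
    gap = mispricing
    profit_each = 0.0
    for _ in range(steps):
        trade = min(gap, 1.0)  # each trader buys while the asset is cheap
        profit_each += trade * gap / max(n_traders, 1)  # alpha split n ways
        gap = max(0.0, gap - impact * trade * n_traders)  # buying closes gap
    return profit_each

# A lone exploiter keeps the whole opportunity; 5000 rivals leave crumbs.
for n in (1, 10, 5000):
    print(n, round(alpha_per_trader(n), 4))
```

The lone trader milks the gap slowly; with 5,000 competitors the gap closes in a single step and each participant's take rounds to nothing, which is the point of the Drexlerian equilibrium.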