In the usual way. Contemporary trading systems are not black boxes full of elven magic. They are models, that is, a bunch of code and some data. If the model doesn’t do what you want it to do, you stick your hands in there and twiddle the doohickeys until it stops outputting twaddle.
Besides, in most trading systems the sophisticated part (“AI”) is an oracle. Typically it outputs predictions (e.g. of prices of financial assets) and its utility function is some loss function on the difference between the prediction and the actual outcome. It has no concept of trades, or dollars, or position limits.
Translating these predictions into trades is usually quite straightforward.
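To make the point concrete, here is a minimal sketch of that translation layer. All of the names and the sizing rule are illustrative assumptions, not any real system's API; the point is that the predictive model only ever sees prices, while the position limit is plain code sitting outside it.

```python
# Hypothetical sketch: turning a price prediction into a trade, with the
# risk control (a hard position limit) living outside the predictive model.
# Names, thresholds, and the sizing rule are all illustrative assumptions.

def prediction_to_order(predicted_price: float,
                        current_price: float,
                        current_position: float,
                        max_position: float) -> float:
    """Return a signed order size (units to buy if positive, sell if negative)."""
    # Predicted return ("edge") is the only thing the oracle contributes.
    edge = (predicted_price - current_price) / current_price
    # Toy sizing rule: scale into the position limit, saturating at a
    # 1% predicted move (an assumed threshold, purely for illustration).
    desired_position = max_position * max(-1.0, min(1.0, edge / 0.01))
    # Hard position limit, enforced by dumb code outside the model:
    # the "AI" never sees this constraint; it only predicts prices.
    desired_position = max(-max_position, min(max_position, desired_position))
    return desired_position - current_position

# Example: the model predicts a 0.5% rise, so half the limit gets deployed.
order = prediction_to_order(100.5, 100.0, 0.0, 1000.0)
print(order)  # 500.0 with these toy numbers
```

The oracle's loss function never references `max_position` at all, which is what makes it an oracle rather than an agent optimizing over trades.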
I’m talking about contemporary-level-of-technology trading systems, not about future malicious AIs.
So? An opaque neural net would quickly learn how to get around trade size restrictions if given the proper motivations.
At which point the humans running this NN will notice that it likes to go around risk control measures and will… persuade it that it’s a bad idea.
It’s not like no one is looking at the trades it’s doing.
How? By instituting more complex control measures? Then you’re back to the problem Kaj mentioned above.