As a result, a lot of programmers at HFT firms spend most of their time trying to keep the software from running away. They create elaborate safeguard systems to form a walled garden around the traders, but, exactly like a human trader, the programs know that they make money by being novel, by doing things that other traders haven’t thought of. These gatekeeper programs are therefore under constant, hectic development as new algorithms are rolled out. The development pace necessitates that they implement only the most important safeguards, which means that certain types of algorithmic behavior can easily pass through. As others have pointed out, these were “quotes”, not “trades”, and they were far away from the inside price, and therefore not something the risk management software would necessarily be looking for.
—comment from gameDevNYC
I can’t evaluate whether what he’s saying is plausible enough for science fiction—it’s certainly that—or likely to be true.
One of the facts about ‘hard’ AI, of the kind required for profitable NLP, is that even the coders who developed it don’t completely understand how it works. If they did, it would just be a regular program.
TLDR: this definitely is emergent behavior: information passing between black-box algorithms whose motivations even the original programmers cannot make definitive statements about.
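To make the commenter’s point about gatekeeper software concrete, here is a minimal, hypothetical sketch of a pre-trade risk gate that implements only “the most important safeguards”: a size limit and a price band applied to orders that could actually execute. Every name and threshold here (`risk_gate`, `Order`, `max_size`, `band_pct`) is an illustrative assumption, not any firm’s actual system; the point is only to show why passive quotes placed far from the inside price can sail through such a check.

```python
# Hypothetical, simplified pre-trade risk gate. Names and thresholds are
# illustrative assumptions, not a real trading firm's risk system.
from dataclasses import dataclass

@dataclass
class Order:
    side: str       # "buy" or "sell"
    price: float    # limit price
    size: int       # shares
    is_quote: bool  # passive quote vs. marketable order; the gate below never consults this

def risk_gate(order: Order, best_bid: float, best_ask: float,
              max_size: int = 10_000, band_pct: float = 0.05) -> bool:
    """Return True if the order is allowed out the door.

    Only the 'most important' checks are implemented, mirroring the
    comment above: a fat-finger size limit, plus a price band around
    the inside market for orders that could actually execute.
    """
    if order.size > max_size:
        return False  # fat-finger size check

    mid = (best_bid + best_ask) / 2
    could_execute = (order.side == "buy" and order.price >= best_ask) or \
                    (order.side == "sell" and order.price <= best_bid)

    if could_execute and abs(order.price - mid) / mid > band_pct:
        return False  # marketable order priced far from the inside: reject

    # Passive quotes far from the inside price never trigger either check,
    # so high-rate, non-executing quoting passes straight through the gate.
    return True

# A buy quote 20% below the best bid is allowed: it cannot execute, so the
# price-band check never fires.
print(risk_gate(Order("buy", 80.0, 100, True), best_bid=100.0, best_ask=100.02))  # True
```

Under these assumptions, the gate blocks a mispriced order that would trade immediately, but a flood of quotes parked well away from the inside price never touches the code paths the safeguards actually inspect.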
AI development in the real world?
Yuck.