I have not read this post super carefully, so apologies if I’ve misread, but I think this post equivocates between “epistemics” and “how to trade”. It may well be true that Figgie is better for teaching epistemics than poker, or that Figgie is good in an absolute sense! But I also think most interesting decision making under uncertainty is actually less adversarial than trading. Most of the problems are things like figuring out how to locate hypotheses (e.g. “which feature should I build?”, “what explains the fact that the data goes up and then down like that?”, or “why won’t my car start?”), how and how much to pay for information (“who can we ask who will know about this?”, “is it worth paying for this independent analysis of my design for a swing in my backyard, or do I trust my own?”, “how much should I draw down user goodwill by throwing experimental features at them?”), or PvE bet-sizing (“how many times should I apply for a tenure-track position before I give up?” or “I got enough cheese and crackers for eight people, do you think that’s enough?”).
I agree that I’m conflating a few different teaching objectives, and there are dimensions of “epistemics” that trading in general doesn’t teach. But on this I want to beg forgiveness on the grounds that, if I were fully recursively explicit about what I meant and didn’t mean by every term, the post would have been even longer than it was.
I do have another long post to write, with the working title “What They Don’t Teach You in Your Quant Trading Internship”, about the ways that training in trading doesn’t prepare you for other important things in the world, or will actively interfere with having good intuitions elsewhere.
All that being said, if you think “which feature should I build” has nothing to learn from Toward a Broader Conception of Adverse Selection, I posit that there’s something missing.