“Do X because decision theory” ~= “Do X because Bayes’ theorem”
Decision theory is an extremely low-level tool for governing your interactions with other people, in the same way that physics is an extremely low-level tool for winning knife fights. Actual human behavior usually involves complex multi-agent interactions, where parties have limited visibility, work off of background priors about what kinds of other humans exist, do signaling and judge other people’s signals via an intuitive, evolved understanding of human social norms, and more. The math of agents generally only affects things there through the way it emerges, unpredictably, into object-level dynamics like economics, politics, and friendship.
Most of the time, it’s best to just reason about those macro-dynamics. When people instead justify their decisions through direct appeal to “decision theory”, they usually have to make many gross simplifications and assumptions. That’s because to do what they’re claiming to do properly, they’d need the equivalent of precise knowledge about Newtonian bodies, which they cannot actually infer directly.
Of course, using decision theory is sometimes appropriate; it’s math, after all, and math is sometimes useful. You can identify ways in which game theory affects everyday life, just like an economist can identify how microeconomics shapes commerce, or a physicist can watch a ball roll down a hill. But when specialists use it (or Bayes’ theorem, or physics) to make decisions, they usually do so in the context of highly regular environments, like finance and foreign policy, where a team of “engineers” has advance time to reason explicitly about the playing field and develop rules, systems, recon, and countermeasures. “MacGyver”-ing your social interactions (if you are actually doing that, and not just justifying regular reasoning via appeals to TDT because it sounds more impressive) works about as well as using your advanced physics knowledge to build a rocket to get you to school.
Personally, the only time I’ve ever “used” decision theory explicitly, to do things I couldn’t justify otherwise, was while negotiating an important business deal. The circumstances in which I used it were critical to my choice to do so:
It was a high-stakes scenario, meaning that finding a strategy even ~10% more optimal would be worth the effort, and a huge mistake might cost me a lot more.
I had advance time to reason about all of my decisions; if my counterparty gave me a document to sign or a counteroffer, I had enough space and motivation to work with a lawyer and analyze how things would play out.
I was negotiating with a competent counterparty, on whom I did not have much private information, and with whom I did not have many interpersonal interactions.
I could use tools like mc-stan to turn my fuzzy probabilities and concerns into models with estimates (a rough sketch of the flavor of this is below).
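To give a flavor of what that looked like, here’s a minimal, made-up sketch (plain numpy Monte Carlo rather than an actual Stan model; all of the numbers and option names are hypothetical placeholders, not figures from the deal): you encode fuzzy beliefs as distributions, simulate outcomes, and compare options by expected value and downside.

```python
# Minimal sketch: turn fuzzy beliefs into expected-value estimates via
# Monte Carlo. All numbers and options are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of simulated worlds

# Fuzzy belief: probability the counterparty accepts a harder counteroffer,
# expressed as a Beta distribution rather than a point estimate.
p_accept = rng.beta(4, 6, N)              # roughly "40%, but quite uncertain"
accepted = rng.random(N) < p_accept

# Payoffs in arbitrary units, relative to the deal currently on the table.
value_if_accepted = rng.normal(1.10, 0.05, N)   # ~10% better terms
value_if_rejected = rng.normal(0.85, 0.10, N)   # worse fallback terms

push_harder = np.where(accepted, value_if_accepted, value_if_rejected)
take_current_offer = np.full(N, 1.0)

for name, outcome in [("push harder", push_harder),
                      ("take current offer", take_current_offer)]:
    print(f"{name}: mean={outcome.mean():.3f}, "
          f"5th percentile={np.percentile(outcome, 5):.3f}")
```

Nothing in that is deep decision theory; it just makes the “if I push and they balk, how bad is that, really?” question explicit enough to look at.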
Even then, game theory maybe represented ~10% of my decision-making. The rest was basic, regular social and legal stuff. And when I had an intuition that insisting on the “game-theoretical” thing would make me look silly or signal something bad for human social reasons, I generally followed my intuition, because those parts of your brain are already honed by evolution to consider these dynamics. What game theory was most helpful for was supplying an answer when my instincts had none, in a regime where I lacked the means to make my regular personal assessments; it was not something I used to override my regular decision-making.
Interesting. I’ve never really had a great grasp on decision theory, and I don’t fully see why exactly it’s analogous to low-level physics or Bayes’ theorem, but at the same time, I sorta see it. And to that extent, I feel like this has moved my understanding of decision theory up a rung, which I appreciate.
Here’s where I’m at with my understanding of decision theory. As a starting point, you can say that it makes sense to maximize expected utility, with utility basically referring to “what you care about”. But then there are weird situations like Newcomb’s problem where “maximize expected utility” arguably doesn’t lead to the outcomes you want, and so, arguably, you’d want to use a different decision theory in those situations. It’s always seemed to me like those situations are contrived, though, and don’t actually occur in real life. Which doesn’t mean decision theory is useless (from what I understand it’s super important for AI alignment) or not intellectually interesting, just that it seems (very) rare that it’d be useful.
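For concreteness, here’s the kind of arithmetic I have in mind for Newcomb’s problem, using the standard textbook payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box if the predictor foresaw one-boxing); the accuracy values are just illustrative. If you condition the opaque box’s contents on your own choice, one-boxing wins in expectation whenever the predictor is better than roughly 50% accurate:

```python
# Evidential-style expected utilities for Newcomb's problem, with the
# standard textbook payoffs. Accuracy values are purely illustrative.

def newcomb_expected_utility(accuracy: float) -> dict:
    """Expected payoff of each choice, conditioning the opaque box's
    contents on what the predictor likely foresaw about that choice."""
    small, big = 1_000, 1_000_000
    return {
        # One-boxing: the predictor probably saw it coming and filled the opaque box.
        "one-box": accuracy * big + (1 - accuracy) * 0,
        # Two-boxing: the predictor probably saw that too and left it empty.
        "two-box": accuracy * small + (1 - accuracy) * (small + big),
    }

for acc in (0.5, 0.9, 0.99):
    print(acc, newcomb_expected_utility(acc))
```

The causal counterargument is that your choice can’t change what’s already in the boxes, which is where the different decision theories part ways; the arithmetic alone doesn’t settle it.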