The thing is, this ‘modern decision theory’, rather than being some sort of central pillar as you’d assume from the name, is mostly philosophers “struggling in the periphery to try to tell us something”, as Feynman once said about philosophers of science.
When it comes to any actual software which does something, this everyday notion of ‘causality’ proves to be a very slippery concept. This Rube Goldberg machine-like model of the world, where you push a domino and it pushes another domino and the chain leads to your reward, is just the very approximate physics that people tend to use to make decisions; it isn’t fundamental, and interesting models of decision making are generally set up to learn it from observed data (which of course makes it impossible to do lazy philosophy involving verbal hypotheticals in which the observations that would lead the agent to believe the problem setup are never specified).
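To make the point concrete, here is a minimal sketch (not from the original; all names and the toy environment are invented for illustration) of what “learn it from observed data” looks like in practice: a model-based agent that is given no causal domino chain up front, but instead estimates a transition model P(s′ | s, a) from logged interactions and then plans against that learned model.

```python
# Hypothetical toy example: the agent never sees the environment's
# dynamics directly; it infers them from observed (s, a, s', r) tuples.
import random
from collections import defaultdict

N_STATES, N_ACTIONS = 5, 2
GAMMA = 0.9

def true_step(s, a):
    """Hidden environment dynamics, opaque to the agent."""
    s2 = min(N_STATES - 1, s + 1) if a == 1 else max(0, s - 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward

# 1. Collect observations by acting randomly.
counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': count}
rewards = defaultdict(float)                     # (s, a, s') -> reward seen
for _ in range(5000):
    s = random.randrange(N_STATES)
    a = random.randrange(N_ACTIONS)
    s2, r = true_step(s, a)
    counts[(s, a)][s2] += 1
    rewards[(s, a, s2)] = r

# 2. Fit a transition model from the observed frequencies.
def p_hat(s, a, s2):
    total = sum(counts[(s, a)].values())
    return counts[(s, a)][s2] / total if total else 0.0

# 3. Plan (value iteration) against the *learned* model, not the true one.
V = [0.0] * N_STATES
for _ in range(100):
    V = [max(sum(p_hat(s, a, s2) * (rewards[(s, a, s2)] + GAMMA * V[s2])
                 for s2 in range(N_STATES))
             for a in range(N_ACTIONS))
         for s in range(N_STATES)]

print("Learned state values:", [round(v, 2) for v in V])
```

Everything the agent “believes” about which domino pushes which is encoded in `p_hat`, a statistical summary of what it has actually observed; a verbal hypothetical that never specifies those observations simply has no handle on such a system.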