I think we can get away with it in part because it’s possible to optimize for things that are only usually decidable.
Take winning the war, for example. There may be no computer program that can look at an arbitrary state of the world and tell you who won the war: there are lots of weird edge cases that could cause a Turing machine to never return a decision. But if we expect to be able to tell who won the war with very high probability (or to have a model that we think matches who wins the war with high probability), then we can just sort of ignore the weird edge cases and model failures when calculating an expected utility.
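
To make the “ignore the edge cases” move concrete, here’s a minimal sketch in Python, with the caveat that everything in it is invented for illustration (the won_the_war function, the dict fields, the 1% edge-case rate are all my assumptions, not from the argument above). It treats undecidability as a verdict function that sometimes returns no answer, and estimates expected utility over only the decided cases. The reason this is safe: if the undecidable cases have total probability ε and utility is bounded, dropping them shifts the expected utility by at most ε times the utility range.

```python
import random

# Toy sketch (all names made up for illustration): a "world state" is a
# dict, and won_the_war() is our usually-decidable verdict function.
# Returning None stands in for the undecidable edge cases; a real Turing
# machine might simply never halt, but here we make the failure visible.
def won_the_war(state):
    if state["weird_edge_case"]:
        return None  # no verdict: edge case / model failure
    return state["we_hold_the_capital"]

def expected_utility(states, u_win=1.0, u_lose=0.0):
    """Estimate E[U] using only the states where the verdict function decides."""
    verdicts = [won_the_war(s) for s in states]
    decided = [v for v in verdicts if v is not None]
    if not decided:
        raise ValueError("every sampled state was undecidable")
    wins = sum(decided)
    return (wins * u_win + (len(decided) - wins) * u_lose) / len(decided)

# Sample worlds where edge cases are rare (an assumed 1% rate): dropping
# them biases the estimate by at most ~1% of the utility range.
worlds = [
    {"weird_edge_case": random.random() < 0.01,
     "we_hold_the_capital": random.random() < 0.6}
    for _ in range(10_000)
]
print(expected_utility(worlds))
```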