Yeah, a didactic problem with this post is that when I write everything out, the “reductive utility” position does not sound that tempting.
I still think it’s a really easy trap to fall into, though, because before you think it through carefully, the assumption of a computable utility function sounds extremely reasonable.
Suppose I’m running a company, trying to maximize profits. I don’t make decisions by looking at the available options, and then estimating how profitable I expect the company to be under each choice. Rather, I reason locally: at a cost of X I can gain Y, I’ve cached an intuitive valuation of X and Y based on their first-order effects, and I make the choice based on that without reasoning through all the second-, third-, and higher-order effects of the choice. I don’t calculate all the way through to an expected utility or anything comparable to it.
With dynamic-programming-inspired algorithms such as AlphaGo, “cached an intuitive valuation of X and Y” is modeled as a kind of approximate evaluation learned from feedback, but feedback requires the ability to compute U() at some point. (So you don’t start out knowing how to evaluate uncertain situations, but you do start out knowing how to evaluate utility on completely specified worlds.)
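To make that structure concrete, here is a minimal sketch of the feedback loop in the style of TD learning on a random walk (the environment, names, and constants are my own invention for illustration; AlphaGo’s actual machinery is of course far more elaborate). The learned evaluator V handles uncertain intermediate states, but its training signal bottoms out in a U() that is only ever computed on fully specified terminal states:

```python
import random

N = 6  # positions 0..N; 0 and N are terminal

def U(s):
    # The computable utility on completely specified (terminal) worlds.
    return 1.0 if s == N else 0.0

# Learned approximate evaluation of uncertain, non-terminal states
# (the "cached intuitive valuation").
V = {s: 0.5 for s in range(1, N)}
ALPHA = 0.05  # learning rate

for _ in range(20000):
    s = N // 2
    while 0 < s < N:
        s2 = s + random.choice([-1, 1])
        # The training signal bottoms out in U at terminal states;
        # elsewhere we bootstrap off our own current evaluation.
        target = U(s2) if s2 in (0, N) else V[s2]
        V[s] += ALPHA * (target - V[s])
        s = s2

print({s: round(v, 2) for s, v in V.items()})
# Estimates converge toward s/N: the evaluation of uncertain
# situations is learned, but only because U was computable at the end.
```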
So one might still reasonably assume you need to be able to compute U() despite this.
I actually found the position very tempting until I got to the subjective utility section.
Specifically, discontinuous utility functions have always seemed basically irrational to me, for reasons related to incomputability.
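For instance (a standard example, not one from the post): take worlds to be infinite bit-sequences $\omega \in \{0,1\}^{\mathbb{N}}$ and let

$$U(\omega) = \begin{cases} 1 & \text{if } \omega_n = 1 \text{ for infinitely many } n, \\ 0 & \text{otherwise.} \end{cases}$$

No finite prefix of $\omega$ constrains $U(\omega)$ at all, so this $U$ is everywhere discontinuous in the product topology, and for the same reason no procedure can approximate it even to within $1/2$ from finite evidence. A computable utility function, by contrast, has to be pinned down to any desired precision by finitely many observations.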