I trust the applicability of the symbols of expected utility theory less over time and trust common beliefs about the automatic implications of putting those symbols in a seed AI even less than that. Am I alone here?
The current theory is all fine until you want to calculate utility based on something other than expected sensory input data; then it doesn't work very well at all. The problem is that we don't yet know how to express "not what you are seeing, but how the world really is" in a machine-readable format.
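To make the distinction concrete, here is a minimal toy sketch (not anyone's actual proposal; the "vault", "camera", and tampering setup are invented for illustration). It contrasts a utility function scored on the raw sensor reading with one scored on an inferred estimate of the hidden world state, and shows where the hard part lives:

```python
# A toy contrast between (a) utility over sensory input and
# (b) utility over "how the world really is". All names here
# are hypothetical illustration, not a real agent design.

def true_world_state():
    """The actual number of diamonds in the vault (hidden from the agent)."""
    return 3

def camera(world_diamonds, tampered=False):
    """Sensor reading. A tampered camera reports whatever looks best,
    regardless of the actual world."""
    return 100 if tampered else world_diamonds

# (a) Utility over expected sensory input: rewards the reading itself.
def utility_over_observations(reading):
    return reading

# (b) Utility over the world as it really is: rewards an estimate of the
# hidden state, reconstructed from a model of how the sensor behaves.
def utility_over_world_model(reading, believed_tampered):
    estimated_diamonds = 0 if believed_tampered else reading
    return estimated_diamonds

if __name__ == "__main__":
    honest_reading = camera(true_world_state(), tampered=False)
    fooled_reading = camera(true_world_state(), tampered=True)

    # The observation-based utility prefers tampering with the sensor...
    print(utility_over_observations(honest_reading))   # 3
    print(utility_over_observations(fooled_reading))   # 100

    # ...while the world-model utility does not, *provided* the agent can
    # represent "my camera is lying to me" -- which is exactly the part
    # we do not know how to write down in general.
    print(utility_over_world_model(honest_reading, believed_tampered=False))  # 3
    print(utility_over_world_model(fooled_reading, believed_tampered=True))   # 0
```

The toy only pushes the problem back a step: `believed_tampered` is handed in by fiat, whereas the real difficulty is getting the agent to form and care about that kind of belief in the first place.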