What I meant by “obscure” is that both “true utility function” and “utility function that encodes the optimal actions to take for the best possible universe” have normative terminology in them that I don’t know how to reduce or operationalize.
Oh yeah, I was definitely speaking normatively there.
For instance, imagine I am looking at action sequences and ranking them. Presumably large portions of that process would feel like difficult judgment calls, where I’d still feel nervous about making some kind of mistake.
Agreed, I’m just saying that in principle there exists some “best” way of making those calls.
Both your phrasings (to my ears) carry the connotation that there is a “best” mistake model, one which is in a relevant sense independent of our own judgment.
Agreed that I’m assuming there is a “best” mistake model; I wouldn’t say that it has to be independent of our own judgment.