My model of Eliezer says that there is some deep underlying concept of consequentialism, of which the “not very coherent consequentialism” is a distorted reflection; and that this deep underlying concept is very closely related to expected utility theory. (I believe he said at one point that he started using the word “consequentialism” instead of “expected utility maximisation” mainly because people kept misunderstanding what he meant by the latter.)
I don’t know enough about conservative vector fields to comment, but on priors I’m pretty skeptical of this being a good example of coherent utilities; I also don’t have a good guess about what Eliezer would say here.
I think johnswentworth (and others) are claiming that they have the same ‘math’/‘shape’, which seems much more likely (if you trust their claims about such things generally).