Shane: Furthermore, your mind (I hope!) does indeed try to direct the future into certain limited supersets which you prefer.
Yes, it does. But I think we have to distinguish between “an agent who sometimes acts so as to produce a future possible world which is in a certain subset of possible states” and “an agent who has a utility function and who acts as an expected utility maximizer with respect to that utility function”. The former description applies to any intelligent agent; the latter does not. Yes, I am aware of the expected utility theorem of von Neumann and Morgenstern, but I think that decision theory over a fixed set of possible world states, with a fixed language for describing properties of those states, is not applicable to situations where, due to increasing intelligence, that fixed set of states quickly becomes outmoded. This really deserves a good, thorough post of its own, but you can get some idea of what I am trying to say by reading “ontologies, approximations and fundamentalists”.
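To give a rough sense of the distinction I have in mind (this is only my own informal notation for the two agent descriptions above, not anything taken from Shane's comment or from von Neumann and Morgenstern's formal statement):

```latex
% A rough sketch of the contrast, in my own (assumed) notation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% An expected utility maximizer needs a total preference ordering, encoded by a
% utility function U, over a fixed set of possible world states S:
\[
  a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \; \sum_{s \in S} P(s \mid a)\, U(s)
\]

% An agent that merely steers the future into some preferred subset G of states
% only needs to distinguish G from its complement:
\[
  \text{choose some } a \in A \ \text{ such that } \ P(s \in G \mid a) \ \text{ is acceptably high.}
\]

% The first definition presupposes that S and U remain meaningful; the second
% leaves room for G to be re-drawn as the agent's ontology of ``states'' is revised.
\end{document}
```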
Unfortunately, you haven’t actually said why you object to these things.
So my first objection, stated more clearly, is that we can usefully consider agents who are not expected utility maximizers, and clearly such agents exist. It strikes me as dangerous to commit to building a superintelligent utility maximizer right now. I have my reasons for not liking utility-maximizing agents; other people have their reasons for liking them, but at least let us keep the options open.
My second objection requires no further justification, and my third is really the same as the above: let us keep our options a bit more open.