FWIW, I wasn’t talking about CEV or superintelligent agents. I was just talking about the task of figuring out what your own goals were.
We can’t really coherently discuss in detail the difficulties of programming goals into superintelligent agents until we know how to build them. Programming one agent’s goals into a different agent looks challenging. Some devotees attempt to fulfill their guru’s desires—but that is a trickier problem than fulfilling their own desires—since they don’t get direct feedback from the guru’s senses. Anyway, these are all complications that I did not even pretend to be going into.
What do you actually mean when you say you “fail at step 1”? You have no idea what your own goals are?!? Or just that your knowledge of your own goals is somewhat incomplete?
I wasn’t talking about CEV or superintelligent agents either. I mean that I have no idea how to write down my own goals. I am nowhere close to having clearly specified goals for myself, in the sense that I as a mathematician usually mean “clearly specified”. The fact that I can’t describe my goals well enough that I could tell them to someone else and trust them to do what I want done is just one indication that my own conception of my goals is significantly incomplete.
OK. You do sound as though you don’t have very clearly-defined goals—though maybe there is some evasive word-play around the issue of what counts as a “clear” specification. Having goals is not rocket science! In any case, IMO, you would be well advised to start at number 1 on the above list.
How can you possibly get what you want if you don’t know what it is? It doesn’t matter whether you are looking to acquire wealth, enjoy excellent relationships, become more spiritual, etc. To get anywhere in life, you need to know “exactly” what you want.
http://www.best-self-help-sites.com/goal-setting.html