In short, the objection seems to be that impressive “logical” arguments about how probabilities of complements must be additive can only be justified in a context-free vacuum, a situation that does not exist in the real world.
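For reference, the additivity being alluded to is presumably just the standard rule for complementary events,

$$P(A) + P(\neg A) = 1,$$

so assigning, say, probability 0.7 to a claim commits you to probability 0.3 for its negation.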
Every “context” can be described as a set of facts and parameters, a.k.a. more data. Perfect data on the context means perfect information, and perfect information means perfect choices and perfect predictions. Sure, it might seem to you that the logical arguments expressed are “too basic to apply to the real world”, but a utility function is really only ever “wrong” when it fails to apply the correct utility to the correct element (“sorting out your priorities”), whether that’s through improper design, lack of self-awareness, missing information, or some other such reason.
For every “no, but theory doesn’t apply to the real world” or “theory and practice are different” argument, there is always an explanation for the proposed difference between theory and reality, and that explanation can be included in the theory. The point isn’t to throw out reality and retreat into our own virtual-theoretical world. It’s to update our model (the theory) in the sanest and most rational way, over and over again (constantly and continuously), so that we get better.
Likewise, maximizing one’s own utility function is not the reduce-yourself-to-a-machine, worship-the-machine-god affair that you seem to believe it is. I have emotions: I get angry, I get irritated (e.g. at your response*), I am happy, and so on. Yet in hindsight it appears that for several years I’ve been maximizing my utility function without knowing that that’s what it’s called (I only picked up the terminology and the more correct/formal ways of talking about it once I started reading LessWrong).
Your “utility function” is not some simple formula into which you plug values, compute, and then call it a decision. A person’s utility function is the entirety of what that person wants, desires, and values. If I tried to write my own utility function down for you, it would be both utterly incomprehensible and probably ridiculously ugly. That’s assuming I’d even be capable of writing it all down, given limited self-awareness, biases, continuous change, and all the rest.
To put it all in perspective, “maximizing one’s utility function” is roughly equivalent to: given the information you have, spend as much time as the decision seems worth choosing the probably-best course of action available, and then act on it, such that in hindsight you’ll have maximized your chances of reaching your own objectives. This doesn’t mean obtaining perfect information, never being wrong, or worshipping a formula. It simply means living your own life, in your own way, with better (and improving) self-awareness, and updating (changing) your own beliefs when they’re no longer correct so that you can act and behave more rationally. Seen this way, LessWrong is essentially a large self-help group for normal people who just want to get better at knowing things and making decisions in general.
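To make the “probably-best course of action given what you know” idea concrete, here is a minimal sketch in Python. The actions, outcome probabilities, and utilities are invented placeholders for illustration, not anything from the original discussion:

```python
# Toy sketch of "maximize expected utility with the information you have".
# The actions, outcome probabilities, and utilities below are made up.

from typing import Dict

# For each available action, current-best (imperfect) beliefs about what
# might happen, and how much each outcome is valued.
beliefs: Dict[str, Dict[str, float]] = {
    "take_job_offer": {"enjoy_it": 0.6, "regret_it": 0.4},
    "stay_put":       {"enjoy_it": 0.8, "regret_it": 0.2},
}
utilities: Dict[str, float] = {"enjoy_it": 10.0, "regret_it": -5.0}

def expected_utility(action: str) -> float:
    """Probability-weighted value of an action under current beliefs."""
    return sum(p * utilities[outcome] for outcome, p in beliefs[action].items())

# Pick the probably-best action available right now, then act on it.
best_action = max(beliefs, key=expected_utility)
print(best_action, expected_utility(best_action))
```

The point of the sketch is only the shape of the procedure: weigh the available options under your current, imperfect beliefs, pick the best-looking one, act on it, and revise those beliefs as new information comes in.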
On a last note, FacingTheSingularity is not a collection of scientific essays meant to be the final answer to every Singularity concern. At best, it can be read as one multi-chapter essay working through various points to support its primary thesis: that its author believes the various experts are right about the Singularity being “imminent” (within this century at the outside). This is clearly stated on the front page, which doubles as the table of contents. As I said in my previous reply, it’s a good popularized introduction. The real meat, however, is in the SingInst articles, essays, and theses, as well as some of the more formal material on LessWrong. Eliezer’s Timeless Decision Theory paper is a good example of more rigorous and technical writing, though it’s far from the most relevant, nor do I think it’s the first one a newcomer should read. If you’re interested in possible AI decision-making techniques, though, it’s a very interesting and pertinent read.
*(I was slightly irritated both by my failure to fully communicate my point and by the dismissal of long-considered and much-debated theories, including beliefs I’ve revalidated time and time again over the years, along with the childish comment about ex-Christians and their “machine god”. This does not mean, however, that I direct that irritation at you or at some other, unrelated outlet. My irritation is my own, and a product of my own mental models.)
Edit: Fixed some of the text and added missing footnote.