Well, this site doesn’t have many posts that say ‘this is a modern controversial issue where lots of people have different opinions, but here is my argument for saying x is the correct one’.
There are posts on this site about decision theory, quantum mechanics, time, statistics/probability theory, charity, artificial intelligence, meta-ethics and cryonics that all seem to fit this bill. I’m sure I’m missing other topics. I do agree that most of the directly rationality-related posts aren’t presenting particularly controversial ideas.
Now I think you need to just update on evidence and maximise expected utility, rather than holding on to a single idea. I’ve changed my mind about my life goals (from ‘be happy’ to ‘save the world’).
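To make that decision rule concrete, here is the standard formulation it alludes to (my notation, not anything specific to this thread): condition your beliefs on the evidence via Bayes’ theorem, then take whichever action has the highest expected utility under those updated beliefs.

```latex
P(h \mid e) = \frac{P(e \mid h)\, P(h)}{P(e)}
\qquad\text{and}\qquad
a^{*} = \arg\max_{a} \sum_{s} P(s \mid e)\, U(a, s)
```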
Is this change in your life goals merely a consequence of updating on evidence and maximizing expected utility? It sounds to me more like a change in your utility function itself.
I didn’t hold any beliefs (never mind strong ones) about decision theory, quantum mechanics, time, probability theory, AI or meta-ethics before I came here. I think I disapproved of cryonics as much as the next person, although now I think it looks a lot better. And I imagine I’m typical, as these topics aren’t especially controversial in the public sphere, so I don’t think many people hold beliefs about them either.
Is this change in your life goals merely a consequence of updating on evidence and maximizing expected utility? It sounds to me more like a change in your utility function itself.
This looks like a confusion of words. I mean, my utility function didn’t actually change—no one performed surgery on my brain. I learned from arguments at LW what was important to me, and changed my mind about what I value. But this should be seen as me coming to better understand what I actually value (or what I would value if I could understand myself better; cf. CEV).
I mean, my utility function didn’t actually change—no one performed surgery on my brain.
Maybe this is a terminological confusion, but often clearing up terminological confusion matters, especially if it involves terminology that has widespread scientific use.
I use “utility function” in its usual decision-theoretic sense. A utility function is a theoretical entity that is part of a model (the VNM decision model) that can be used to (imperfectly) predict and prescribe a partially rational agent’s behavior. On this conception, a utility function is just an encapsulation of an agent’s preferences, idealized in certain ways, as revealed by their choice behavior. There’s no commitment to the utility function corresponding to some specific psychological entity, like some sort of script in the brain that determines the agent’s choice behavior.

This seems to be different from the way you’re using the phrase, but it’s worth pointing out that in economics and decision theory “utility function” is just used in this minimal sense. Utilities and utility functions are not supposed to be psychological causes of behavior; they are merely idealized mathematical formalizations of the behavior. Decision theorists avoid the causal notion of utility (Ken Binmore calls it the “causal utility fallacy”) because it makes substantive assumptions about how our brains work, assumptions that have not yet been borne out by psychology.
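To spell out that minimal sense, here is a paraphrase of the standard VNM representation theorem (the textbook result, not anything from this thread): if an agent’s preferences over lotteries satisfy completeness, transitivity, continuity and independence, then there is a function u representing them, where lottery L gives outcome x_i with probability p_i and lottery M gives outcome x_j with probability q_j.

```latex
L \succsim M \;\iff\; \sum_{i} p_i\, u(x_i) \;\ge\; \sum_{j} q_j\, u(x_j),
\qquad
u'(x) = a\, u(x) + b \;\; (a > 0) \text{ represents the same preferences.}
```

The utility function is read off the preference ordering, unique only up to positive affine transformation; nothing in the theorem posits it as a cause of the choices.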
So on this view, changing one’s utility function does not require brain surgery. It happens all the time naturally. If your choice behavior changes enough, then your utility function has changed. Also, it’s not clear what it would mean to “discover” some part of your pre-existing utility function that you had not known about before. If your choice behavior prior to this “discovery” was different, then the “discovery” is actually just a change in the utility function, not a realization that you weren’t actually adhering to your pre-existing utility function.
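A toy sketch of this revealed-preference reading (my own illustrative example, with made-up options; nothing here is from the thread): infer an ordinal ranking purely from pairwise choice data, and note that when the choice data change, the inferred “utility function” changes with them.

```python
# Toy illustration: a "utility function" here is nothing more than a
# summary of observed choices. The options and choice data are invented.

def infer_ranking(choices):
    """Recover an ordinal preference ranking from pairwise choice data.

    `choices` maps (option_a, option_b) -> the option that was picked.
    Options are ranked by how often they were chosen, a crude stand-in
    for an ordinal utility function.
    """
    wins = {}
    for (a, b), picked in choices.items():
        wins.setdefault(a, 0)
        wins.setdefault(b, 0)
        wins[picked] += 1
    return sorted(wins, key=wins.get, reverse=True)

# Earlier choice behaviour: "be happy" is consistently picked.
before = {("be happy", "save the world"): "be happy",
          ("be happy", "earn money"): "be happy",
          ("save the world", "earn money"): "earn money"}

# Later choice behaviour: "save the world" is consistently picked.
after = {("be happy", "save the world"): "save the world",
         ("be happy", "earn money"): "be happy",
         ("save the world", "earn money"): "save the world"}

print(infer_ranking(before))  # ['be happy', 'earn money', 'save the world']
print(infer_ranking(after))   # ['save the world', 'be happy', 'earn money']

# On this view the change in behaviour *is* the change in the utility
# function; there is no further psychological fact waiting to be discovered.
```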
Alright—I’m only just starting my decision theory textbook, so I’ll readily admit I probably used the word incorrectly. But I took your point to be that LessWrong hadn’t really changed my thinking on a major topic if what had changed was my utility function. Under this definition, though, where a utility function is just the most compact description of what your acts imply you value, changing my beliefs (in the informal sense) has changed my utility function. LW has changed my thinking on the important topic of life goals, and that’s the point I was making.