I think I’m going to stop having opinions.
Well, let me explain what I mean by that; I’m using the word opinion in an idiosyncratic sense which I hope will be made clear by the following. Of course it makes sense to have beliefs about the world (let’s say, e.g., “If the legislature passes the Foobar Act, then it will reduce the budget deficit and save thousands of quality-adjusted life-years”), and it makes sense to have preference orderings over states of the world (e.g., “I would prefer to live in worlds with smaller budget deficits and people living longer, healthier lives”). But the opinionated state of mind which passionately insists, “The legislature should pass the Foobar Act!” in the absence of any actual planning to bring about that outcome, is a confusion, a waste of cognition (insofar as we construe the function of thought as being to select actions). The legislature has no reason to know that I exist, so why bother getting so upset about whatever they’re going to do anyway, with or without my approval? Insofar as someone cares what I think, I’m happy to tell them what’s on my mind, or even to argue (in the sense of offering arguments, not in the sense of trying to win). But, you know, to a first approximation, no one cares what I think. They really don’t. Why pretend otherwise?
Let me preempt some potential misunderstandings. I’m not simply reiterating that politics is the mindkiller; I think the argument carries even if an ideal reasoner would agree that the Foobar Act really would be a good thing if passed. And I’m not saying that one should never participate in any collective project in which one person’s effort is unlikely to make a critical difference; there are certainly reasons why someone might want to do that (e.g., timeless-decision-theoretic “I should decide as if deciding for all instances of this decision algorithm” reasoning might apply, or a small probability of a large impact might be worth it in expectation, &c.). But notice that these possible reasons for participating in a collective project are not most people’s actual reasons for having opinions about things they’re not going to affect.
Of course, integrating this insight into my thoughts will make it even harder than it already is for me to communicate with people who haven’t spent the last five years obsessing about the nature of rationality … but the other thing I need to stop doing is expecting to be able to communicate with arbitrary people on arbitrary topics.
For the past five years, I’ve spent a lot of time being really upset and angry and offended and mindkilled that most people in society seem to systematically conflate schooling (enrolling in courses and obeying the commands issued by the designated teacher) with what I would call education (learning important things by whatever means). I still judge this to be a worthy sentiment—I really would prefer to live in a world with more authenticity, and existing schools still seem really terrible in contrast to what people can do for themselves when they’re really motivated—but it’s only now that I’m starting to see (really see, not just dutifully mouth the words) that my behavior of being outraged all the time wasn’t actually contributing to that goal, that visibly resenting the fact that other people don’t care about the things that I care about is pointless: it doesn’t help them, and it doesn’t help me. Better that I should just learn to lie (as part of the general trend where, surprisingly, I become a better person as I become less principled), or at least, to not wear my heart on my sleeve all the damned time.
I’m really confused; I want to say that I feel as if my brain has wandered into a slightly different state of consciousness that I’m not used to. I thought this rationality stuff was cool and all, and I thought I understood it pretty well—couldn’t I speak fluently about the same things everyone else was talking about?—but suddenly over the past several days, as I’ve tried to apply the intelligence-as-optimization viewpoint to my personal life problems (q.v. the parent and my comment on lying), it starts to feel as if I’m actually starting to sort of get it. I seem to feel reluctant to report this (notice all the hedging words: “seem,” “as if,” “want to say,” &c.), because introspection is unreliable, and verbal self-reports are unreliable, and I seem to have this thing where I feel reluctant to endorse statements that could be construed to imply that I should have higher status, and there have certainly been occasions in the past where I thought I had a life-altering epiphany and I turned out to be mistaken. So maybe you shouldn’t believe me … but that’s just the thing: this entire idea of believing or not-believing natural language propositions can’t be how intelligence actually works, and maybe the reason I feel the need to use all these hedging words is that it’s becoming more salient to me that I really don’t know what’s actually going on when I think; I’m writing these words, but I no longer feel sure what it means to believe them.
I want to say that I ought to be scared about the whole AI existential risk thing, but I’m not—and, come to think of it, as a matter of cause and effect, my being scared won’t actually help except insofar as it motivates me to do something helpful. I’d really rather just not think about it at all anymore. Of course, not-thinking about a risk doesn’t make it go away, but we should make a distinction between not-thinking-about something as a way of denying reality, and not-thinking-about something as a reallocation of cognitive resources: if I spend my own thinking time on fun, safe, selfish ideas, but learn how to make some money, and use some of the money to help fund people who are better at thinking than me to work on the scary confusing world-destroying problems, isn’t that good enough? Isn’t that morally acceptable? Of course, these ideas of “enough” and “morally acceptable” don’t exist in decision theory, either, but I doubt it’s psychologically realistic to function without them, and I don’t think I actually want to.