I define “optimism” as the tendency to weigh positive outcomes more heavily than their expected value warrants, and negative outcomes less.
Just flagging that this is not at all how I or Quintin are using the term. We are simply using it to point at an object-level belief that alignment is easy, along with a couple ethical/political values we espouse.
Thanks for pointing this out! I agree that my definition of “optimism” is not the only way one can use the term. However, from my experience (and like I said, I am basically an optimist), in a highly uncertain situation, the weighing of perceived benefits vs. risks heavily influences one’s probability estimates. If I want to found a start-up, for example, I convince myself that it will work. I will unconsciously weigh positive evidence higher than negative. I don’t know if this kind of focusing on the positive outcomes may have influenced your reasoning and your “rosy” view of the future with AGI, but it has happened to me in the past.
“Optimism” certainly isn’t the same as a neutral, balanced view of possibilities. It is an expression of the belief that things will go well despite clear signs of danger (e.g. the often expressed concerns of leading AI safety experts). If you think your view is balanced and neutral, maybe “optimism” is not the best term to use. But then I would have expected many more caveats and expressions of uncertainty in your statements.
Also, even if you think you are evaluating the facts in an unbiased and neutral way, there’s still the risk that others who read your texts will not, for the reasons I mention above.
People here describe themselves as “pessimistic” about a variety of aspects of AI risk on a very regular basis, so this seems like an isolated demand for rigor.
“Optimism” certainly isn’t the same as a neutral, balanced view of possibilities. It is an expression of the belief that things will go well despite clear signs of danger (e.g. the often expressed concerns of leading AI safety experts). If you think your view is balanced and neutral, maybe “optimism” is not the best term to use. But then I would have expected many more caveats and expressions of uncertainty in your statements.
This seems like a weird bait and switch to me, where an object-level argument is only ever allowed to conclude in a neutral middle-ground conclusion. A “neutral, balanced view of possibilities” is absolutely allowed to end on a strong conclusion without a forest of caveats. You switch your reading of “optimism” partway through this paragraph in a way that seems inconsistent with your earlier comment, in such a way that smuggles in the conclusion “any purely factual argument will express a wide range of concerns and uncertainties, or else it is biased”.
From Wikipedia: “Optimism is an attitude reflecting a belief or hope that the outcome of some specific endeavor, or outcomes in general, will be positive, favorable, and desirable.” I think this is close to my definition or at least includes it. It certainly isn’t the same as a neutral view.
“Optimism” certainly isn’t the same as a neutral, balanced view of possibilities. It is an expression of the belief that things will go well despite clear signs of danger (e.g. the often expressed concerns of leading AI safety experts).
I just disagree; I think the term has many “valid” uses, and one is to refer to an object-level belief that things will likely turn out pretty well. It doesn’t need to be irrational by definition.
I also think AI safety experts are self-selected to be more pessimistic, and that my personal epistemic situation is at least as good as theirs on this issue, so I’m not bothered that I’m more optimistic than the median safety researcher. I also have a fairly good “error theory” for why many people are overly pessimistic, which will be elucidated in upcoming posts.
I think the term has many “valid” uses, and one is to refer to an object-level belief that things will likely turn out pretty well. It doesn’t need to be irrational by definition.
Agreed. Like I said, you may have used the term in a way different from my definition. But I think in many cases, the term does reflect an attitude like I defined it. See Wikipedia.
I also think AI safety experts are self-selected to be more pessimistic
This may also be true. In any case, I hope that Quintin and you are right and I’m wrong. But that doesn’t make me sleep better.