I think there’s an important point about locus of control and scope. You can imagine someone who, early in life, decides that their life’s work will be to build a time machine, because the value of doing so is immense (turning an otherwise finite universe into an infinite one, for example). As time goes on, they notice themselves becoming more and more pessimistic about their prospects of doing so, but they have an emotional block against giving up. The stakes are too high for doomerism to be entertained!
But I think they overestimated their locus of control when making their plans, and they should have updated as evidence came in. If they reduced the scope of their ambitions, they might switch from plans that are crazy (because they have to condition on time travel being possible) to plans that are sane (because they can condition on actual reality). Maybe they just invent flying cars instead of a time machine, or whatever.
I see this post as saying: “Look, people interested in futurism: if you want to live in reality, this is where the battle line actually is. Fight your battles there; don’t send bombing runs behind miles of anti-air defenses and then wonder why you don’t seem to be getting any hits.” Yes, knowing the actual state of the battlefield might make people less interested in fighting the war, but especially for intellectual wars it doesn’t make sense to lie to maintain morale.
[In particular, lies of the form “alignment is easy!” work both to attract alignment researchers and convince AI developers and their supporters that developing AI is good instead of world-ending, because someone else is handling the alignment bit.]
Aside: Regardless of whether the quoted claim is true, it does not seem like a prototypical lie. My read of your meaning is: “If you [the hypothetical person claiming alignment is easy] were an honest reasoner and worked out the consequences of what you know, you would not believe that alignment is easy; thus has an inner deception blossomed into an outer deception; and thus I call your claim a ‘lie.’”
And under that understanding of what you mean, Vaniver, I think yours is not a wholly inappropriate usage, but it is an unconventional one. In its unconventionality, I think it implies untruths about the intentions of the claimants (namely, that they semi-consciously seek to benefit by spreading a claim they know, on some level, to be false). In your shoes, I think I would have just called it an “untruth” or “false claim.”
Edit: I now think you might have been talking about EY’s hypothetical questioners who thought it valuable to purposefully deceive about the problem’s difficulty, and not about the typical present-day person who believes alignment is easy?
That is what I was responding to.