I genuinely, sincerely appreciate that you took the time to make this all explicit, and I think you assumed a more-than-reasonable amount of good faith on my part given how lathered up I was and how hard it is to read tone on the Internet.
I think the space we are talking across is “without checking if this will make my beliefs more accurate.” Accuracy entails “what do I think is true” but also “how confident am I that it’s true”. Persuasion entails that plus “will this persuasion strategy actually make their beliefs more accurate”. In hindsight, I should have communicated why I thought what I proposed would make people’s beliefs about humanity more accurate.
However, the response to my comments made me less confident that the intervention would be effective at making those beliefs more accurate. Plus, given the context, you had little reason to assume that my truth+confidence calculation was well-calibrated.
There’s also the question of whether the expected value of button-pressing exceeds the expected life-worsening, how confident a potential button-presser is in that answer, and by how wide a margin the former exceeds the latter. I do think that’s a fair challenge to your final thought.
Thanks again.