I think this is strawmanning the appeal-to-consequences argument, by mixing up private beliefs and public statements, and by ending with a pretty superficial agreement on rule-consequentialism without exploring how to pick which rule applies (among one for improving private beliefs, one for sharing relevant true information, and one for suppressing harmful information).
The participants never actually attempt to resolve the truth about puppies saved per dollar, which calls the whole thing into question—both whether their agreement is real and whether it’s the right one. Many of these discussions should include a recitation of [ https://wiki.lesswrong.com/wiki/Litany_of_Tarski ], and a direct exploration of whether it’s beliefs (private) or publication (impacting presumed-less-rational agents) that is at issue.
In any case, appeals to consequences at the meta/rule level still HAVE to be grounded in appeals to consequences at the actual object level. A rule with so many exceptions that it’s mostly wrong is actively harmful. My objection to the objection to “appeal to consequences” is that the REAL objection is to bad epistemology of consequence prediction, not to the desire to predict consequences.
In a completely separate direction, the consequences of speech acts in public/group settings are WAY more complicated than the epistemic consequences of a truth-seeking discussion among a small group of fairly close rationalist-inclined friends. Different rules/defaults/norms apply, and different calculations of the consequences of specific speech acts are made.
All that said, I prefer norms that lean toward truth-telling and truth-seeking, and it makes me suspicious when those are at odds with the consequences of speech acts. I demand a higher standard of evidence from my consequence predictions for lying than for withholding relevant facts, and a higher standard for withholding than for truth-telling.