Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x? (If so, are you sure that’s not why you don’t think the truth is more offensive than you currently think it is?) Immaterial souls are stabbed all the time in the sense that their opinions are discredited.
Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x?
For some multiplier, yes. (I don’t know what the multiplier is.) If potentates would murder me on the spot unless I deny that they live acting by others’ action, and affirm that they are loved even if they don’t give and are strong independently of a faction, then I will say those things in order to not be murdered on the spot.
I guess I need to clarify something: I tend to talk about this stuff in the language of virtues and principles rather than the language of consequentialism, not because I think the language of virtues and principles is literally true as AI theory, but because humans can’t use consequentialism for this kind of thing. Some part of your brain is performing some computation that, if it works, to the extent that it works, is mirroring Bayesian decision theory. But that doesn’t help the part of you that can talk, the part that can be reached by the part of me that can talk.
“Speak the truth, even if your voice trembles” isn’t a literal executable decision procedure—if you programmed your AI that way, it might get stabbed. But a culture that has “Speak the truth, even if your voice trembles” as a slogan might—just might—be able to do science, or better, to get the goddamned right answer even when the local analogue of the Pope doesn’t like it. I falsifiably predict that a culture that has “Use Bayesian decision theory to decide whether or not to speak the truth” as its slogan won’t be able to do science—Platonically, the math has to exist, but letting humans appeal to Platonic math whenever they want is just too convenient of an excuse.
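(To make the contrast concrete, here is a minimal, purely illustrative sketch, not anyone's actual proposal; the function names, utilities, and probabilities are invented placeholders. It shows the difference between the slogan taken as a literal rule and a Bayesian-decision-theoretic calculation about whether to speak.)

```python
# Illustrative sketch only: slogan-as-rule vs. expected-utility calculation.
# All names and numbers below are hypothetical placeholders.

def speak_even_if_voice_trembles(claim_is_true: bool) -> bool:
    """The slogan read as a literal rule: speak whenever the claim is true."""
    return claim_is_true

def bayesian_decision_to_speak(
    p_retaliation: float,         # estimated probability that power retaliates
    cost_of_retaliation: float,   # disutility if it does (e.g., getting "stabbed")
    value_of_truth_spoken: float, # utility of the truth entering the discourse
) -> bool:
    """Speak iff the expected utility of speaking exceeds that of staying silent (0)."""
    expected_utility = value_of_truth_spoken - p_retaliation * cost_of_retaliation
    return expected_utility > 0

# The rule speaks regardless of consequences; the calculation is exactly the kind of
# convenient escape hatch worried about above, because its inputs are easy to fudge
# in the direction of silence.
print(speak_even_if_voice_trembles(claim_is_true=True))  # True
print(bayesian_decision_to_speak(0.9, 100.0, 5.0))       # False: stays silent
```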
Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x? I falsifiably predict that your answer is “Yes.” Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I’ve done field work.)
(If so, are you sure that’s not why you don’t think the truth is more offensive than you currently think it is?)
Great question! No, I’m not sure. But if my current view is less wrong than the mainstream, I expect to do good by talking about it, even if there exists an even better theory that I wouldn’t be brave enough to talk about.
Immaterial souls are stabbed all the time in the sense that their opinions are discredited.
Can you be a little more specific? “Discredited” is a two-place function (discredited to whom).
Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x?
Maybe not; probably; yes.
Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I’ve done field work.)
Most of the consequences I’m worried about are bad effects on the discourse. I don’t know what experiment I’d do to figure those out. I agree you have more data than me, but you probably have 2x the personal data rather than 10x the personal data, and most relevant data is about other people because there are more of them. Personal consequences are more amenable to experiment than discourse consequences, but I already have lots of low-risk data here, and high-risk data would carry high risk without being qualitatively more informative. (Doing an experiment here doesn’t teach you qualitatively different things than watching the experiments that the world constantly runs.)
Can you be a little more specific? “Discredited” is a two-place function (discredited to whom).
Discredited to intellectual elites, who are not only imperfectly rational, but get their information via people who are imperfectly rational, who in turn etc.
It almost sounds like you’re saying we should tell people they should always speak the truth even though it is not the case that people should always speak the truth, because telling people they should always speak the truth has good consequences. Hm!
I don’t like the “speak the truth even if your voice trembles” formulation. It doesn’t make it clear that the alternative to speaking the truth is not lying but staying silent. It also suggests an ad hominem theory of why people aren’t speaking (fear, presumably of personal consequences) that isn’t always true. To me, this whole thing is about picking battles versus not picking battles rather than about truth versus falsehood. Even though picking your battles means a non-random set of falsehoods remains uncorrected, picking battles is still pro-truth.
If we should judge the Platonic math by how it would be interpreted in practice, then we should also judge “speak the truth even if your voice trembles” by how it would be interpreted in practice. I’m worried the outcome would be people saying “since we talk rationally about the Emperor here, let’s admit that he’s missing one shoe”, regardless of whether the emperor is missing one shoe, is fully dressed, or has no clothes at all. All else being equal, being less wrong is good, but sometimes being less wrong means being more confident that you’re not wrong at all, even though you are.
(By the way, I think of my position here as having a lower burden of proof than yours, because the underlying issue is not just who is making the right tradeoffs, but whether making different tradeoffs than you is a good reason to give up on a community altogether.)
(This comment is really helpful for me to understand your positions.)
Some part of your brain is performing some computation that, if it works, to the extent that it works, is mirroring Bayesian decision theory. But that doesn’t help the part of you that can talk, the part that can be reached by the part of me that can talk.
Why not? It seems likely to me that the part of my brain that is doing something like Bayesian decision theory can be trained in certain directions by the part of me that talks/listens (for example by studying history or thinking about certain thought experiments).
I falsifiably predict that a culture that has “Use Bayesian decision theory to decide whether or not to speak the truth” as its slogan won’t be able to do science
I’m not convinced of this. Can you say more about why you think this?