Yeah… something about this conversation’s annoying. I can’t figure out what it is.
You’re what I call “agreeing to agree”.
(Shit, I guess I never published that blog post, or the solution to it. Uh, this comment is a placeholder for me.)
My recommendation is to sidestep talking about propositions that you agree with or disagree with, and generate hypothetical or real situations, looking for the ones where you’re inclined to take different actions. In my experience, that surfaces real disagreement better than talking about what you believe (for a number of reasons).
E.g.:
If you had the option to press a button to cut the number of observant Christians in the world by half (they magically become secular atheists), would you press it? What if you have to choose between doubling the number of observant Christians or halving the number, with no status quo option?
If you could magically cause 10% of the people in the world to have read a single book, what book? (With the idea being that maybe Alex would recommend a religious book, and Ben probably wouldn’t.)
What attitude does each of you have about SBF or FTX? Do you feel some joy about his getting jail time? What would each of you do if you were the judge in his case?
Same question but for other FTX employees.
If Ben catches a person outright lying on LessWrong, what should he do?
[possibly unhelpfully personal] Should Ben have done anything different in his interactions with Nonlinear?
(Those are just suggestions. Probably you would need to generate a dozen or so, and tweak and iterate until you find a place where you have different impulses for what to do in that situation.)
If you find an example where you take different actions or have a different attitude, now you’re in a position to start trying to find cruxes.
Personally, I’m interested: does this forgiveness business suggest any importantly different external actions? Or is it mostly about internal stance?
My answer to a few of Eli’s prompts below.

If you had the option to press a button to cut the number of observant Christians in the world by half (they magically become secular atheists), would you press it? What if you have to choose between doubling the number of observant Christians or halving the number, with no status quo option?
Half, naturally. Best to rip the band-aid off and then seek new things that aren’t fundamentally based in falsehoods and broken epistemologies.
If you could magically cause 10% of the people in the world to have read a single book, what book? (With the idea being that maybe Alex would recommend a religious book, and Ben probably wouldn’t.)
10% of the world is way too many people. I’m not sure a lot of them would be capable of comprehending a lot of the material I’ve read, given the distribution of cognitive and reading ability in the world. It would probably have to be something written for most people to read, not like ~anything on LessWrong. Like, if HPMOR could work I’d pick that. If not, perhaps some Dostoyevsky? But, does 10% of the world speak English? I am confused about the hypothetical.
But not the Bible or the Quran or what-have-you, if that’s what you’re thinking.
What attitude does each of you have about SBF or FTX? Do you feel some joy about his getting jail time? What would each of you do if you were the judge in his case?
I don’t think ‘joy’ is what I’d feel, but I am motivated by the prospect of him getting jail time. If I could work to counterfactually increase his jail time or cause it to happen at all, I’d be very motivated to spend a chunk of my life doing so. If someone were to devote much of their adult life to making sure SBF went to jail, I’d think that a pretty good way to spend one’s life. The feeling is one of ‘setting things right’. Justice is important and motivating. There’s something healing about horrendous behavior on a mass scale getting punished, rather than people behaving in sickening ways and then getting to simply move forward with their lives with no repercussions or accountability. Something is very broken when people can betray your trust and intentionally hurt you that much for no good reason, and for them to get away with it can really make the hurt person quite twisted and unable to trust other people to be good. And so it is good to set things right.
If Ben catches a person outright lying on LessWrong, what should he do?
If someone verifiably lies, by default I’ll just call it out and have it be a part of their reputation (and bring it up in the future when their reputation is relevant). That’s how most things are dealt with, and in this scene things will likely go pretty badly for them. If they want to apologize and make amends then they can perhaps settle up the costs they imposed on both the people they deceived and the damage they did to social norms, but we have rules about what we moderate (e.g. sockpuppeting), and it isn’t “all unethical behavior”. Most people have lied in their lives (including me); I am not going to personally hold accountable (or rate limit) everyone who has ever behaved badly. In general, someone having behaved badly in their lives does not mean they cannot make good contributions to public discourse. I wouldn’t stop most people involved in major crimes like murder and fraud from commenting on LessWrong about areas of interest / importance; e.g., Caroline Ellison would be totally able to comment on LW about most topics we discuss.
10% of the world is way too many people. I’m not sure a lot of them would be capable of comprehending a lot of the material I’ve read, given the distribution of cognitive and reading ability in the world. It would probably have to be something written for most people to read, not like ~anything on LessWrong. Like, if HPMOR could work I’d pick that. If not, perhaps some Dostoyevsky? But, does 10% of the world speak English? I am confused about the hypothetical.
The idea of this conversational technique is that you can shape the hypothetical to find one where the two of you have strong, clear, differing intuitions.
If you’re like “IDK man, most people won’t even understand most of the books that I think are important, and so most of the problem is figuring out something that ‘works’ at all, not picking the best thing”, you could adjust the hypothetical accordingly. What about 10% of the global population sampled randomly from people who have above 110 IQ, and if they’re not English speakers they get a translation? Does that version of the hypothetical give you a clearer answer?
Or like (maybe this is a backwards way to frame things, but) I would guess[1] that there’s a version of some question like this to which you would answer the Sequences, or something similar, since it seems like your take is “[one of] the major bottleneck[s] in the world is making words mean things.” Is there a version that does return the Sequences or similar?
FYI, I feel interested in these answers and wonder if Alex disagrees with either the specific actions or something about the spirit of the actions.
For instance, my stereotype of religious prophets is that they don’t dedicate their lives to taking down or prosecuting a particular criminal. My personal “what would Jesus / Buddha do?” doesn’t return “commit my life to making sure that guy gets jail time.” Is that an “in” towards your actual policy differences (the situations in the world where you would make different tradeoffs)?

Though obviously, don’t let my guesses dictate your attitudes. Maybe you don’t actually think anything like that!
I like the technique.

Yeah, I should probably write these up. I called this “action-oriented operationalization” (in contrast to prediction-oriented operationalization) and at least part of the credit goes to John Salvatier for developing it.