Much less significantly, a culture in which you are obliged to either raise your children or see them put into foster care is also a much less fun culture to live in. And it’s not clear that adding a person to the universe (as things stand today) will, on average, increase the amount of fun had down the line; this is why you’re not obliged to be trying to have as many children as possible at all times.
Somewhat regardless of our private feelings on the matter, a tip: forget OKCupid; do you not see how earnestly stating such beliefs in public gives your handle a reputation you might not mind in general, yet would greatly want to avoid at some future point in your LW blogging, such as when trying to sway someone in an area concerning ethical values and empathy?
I’d hope that LessWrong is a community in which having in the past been willing to support controversial opinions would increase your repute, not decrease it. If we always worry about our reputation when having discussions about possibly controversial topics, we’re not going to have much discussion at all.
Giving respect to controversy for the sake of controversy is just inviting more trolling and flamewars.
I have respect for true ideas, whether they are outmoded or fashionable or before their time. I don’t care whether an idea is original or creative or daring or shocking or boring, I want to know if it’s sound.
The fact that you seem to expect increased respect because of controversial opinions makes me think that when you wrote about your support for infanticide, you were motivated more by the fact that many people disagreed with you than by the belief that it’s actually a good idea that would make the world a better place.
You remind me of Hanson (well, Doherty actually) on Libertarian Purity Duels:
Libertarians are a contentious lot, in many cases delighting in staking ground and refusing to move on the farthest frontiers of applying the principles of noncoercion and nonaggression; resolutely finding the most outrageous and obnoxious position you could take that is theoretically compatible with libertarianism and challenging anyone to disagree. If they are not of the movement, then you can enjoy having shocked them with your purism and dedication to principle; if they are of the movement, you can gleefully read them out of it.
...whereas my positions on Newcomb’s paradox… are not
two-box
Let’s not go off on that tangent in here, but two-boxing is hardly uncontroversial on LW: lots of one-boxers here, including Yudkowsky. I’m one too. Also, didn’t you say you “want to win”?
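(An aside for concreteness, not from the original thread: a minimal expected-value sketch of the arithmetic one-boxers point to, assuming a predictor with accuracy p; causal decision theorists compute the payoff differently, which is the whole dispute.)

```python
# Standard Newcomb setup: box A (transparent) holds $1,000; box B holds
# $1,000,000 iff the predictor predicted you would take only box B.
def expected_values(p: float) -> dict:
    """Expected payoff of each strategy against a predictor with accuracy p,
    conditioning on the prediction tracking your choice (the one-boxer's math)."""
    one_box = p * 1_000_000                     # predictor right: box B is full
    two_box = p * 1_000 + (1 - p) * 1_001_000   # predictor right: box B is empty
    return {"one-box": one_box, "two-box": two_box}

for p in (0.5, 0.9, 0.99):
    print(p, expected_values(p))
# One-boxing pulls ahead once p exceeds ~0.5005, i.e. as soon as the
# predictor is even slightly better than chance.
```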
We don’t mind. You aren’t actually going to kill babies, and you aren’t able to make it legal either (i.e. “mostly harmless”). Just don’t count too much on your anonymity! Assume that everything you say on the internet will come back to haunt you in the future, when trying to get a job, for example. Or when you are unjustly accused of murder in Italy.
EDIT: Pardon me; when I say “we” don’t mind, I am speaking for myself and guessing at an overall consensus. I suspect there are one or two who do mind, but that’s OK and I consider it their problem.
That only has a certainty approaching 1 if we all went and forgot about CEV and related prospects.
Really? What’s your estimate of the probability that Bakkot’s inclusion in a CEV-calculating algorithm’s target mind-space will make it more likely for the resulting CEV to tolerate infanticide?
Pretty negligible, but still orders of magnitude above Bakkot just altering society to tolerate infanticide on his own.
I would tend to agree for what it’s worth.
I think I’m not understanding you.
Call P1 the probability that Bakkot’s inclusion in a CEV-calculating algorithm’s target mind-space will make it more likely for the resulting CEV to tolerate infanticide. Call P2 the probability that Bakkot isn’t capable of making infanticide legal, disregarding P1.
You seem to be saying P1 approximately equals 0 (which is what I understand “negligible” to mean), and P2 approximately equals 1, and that P2 − P1 does not approximately equal 1.
I don’t see how all three of those can be true at the same time.
Edit: if the downvotes are meant to indicate I’m wrong, I’d love a correction as well. OTOH, if they’re just meant to indicate the desire for fewer comments like these, that’s fine.
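(A tiny numerical illustration of the inconsistency being pointed out, with hypothetical near-limit values of my own choosing, not figures from the thread:)

```python
# Plug in near-limit values for the first two stated claims:
P1 = 0.001  # "pretty negligible" -- P1 approximately 0
P2 = 0.999  # "certainty approaching 1" -- P2 approximately 1

# Then the difference is forced toward 1 as well:
print(P2 - P1)  # 0.998 -- so the third claim, that P2 - P1 is NOT
                # approximately 1, cannot hold alongside the first two.
```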
Where do you get “P2 approximately equals 1”?
Multiheaded said “That only has a certainty approaching 1 if we all went and forgot about CEV and related prospects.”
I understand “that” to refer to “Bakkot isn’t able to make infanticide legal”.
I conclude that the probability that Bakkot isn’t capable of making infanticide legal, if we forget about CEV and related prospects, is approximately 1.
P2 is the probability that Bakkot isn’t capable of making infanticide legal, if we disregard the probability that Bakkot’s inclusion in a CEV-calculating algorithm’s target mind-space will make it more likely for the resulting CEV to tolerate infanticide.
I conclude that P2 is approximately 1.
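(One way to formalize the chain above, with notation of my own rather than the thread’s: write L for “Bakkot gets infanticide legalized” and C for “the CEV route is in play”.)

```latex
% Multiheaded's statement, read as a conditional probability:
\[ P(\lnot L \mid \lnot C) \approx 1 \]
% P2 was defined as the probability that Bakkot cannot make infanticide
% legal once the CEV route is set aside, which is the same quantity:
\[ P_2 = P(\lnot L \mid \lnot C) \approx 1 \]
```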
I’d hope that LessWrong is a community in which having in the past been willing to support controversial opinions would increase your repute, not decrease it.
Not always. For any random Lesswrongian with a contrarian position you’re nearly sure to find a Lesswrongian with a meta-contrarian one.
Also, notice that your signaling now is so bad from a baseline human standpoint that people’s sociopath/Wrong Wiring alarms are going off, or would go off if there were more of such signaling. I think that my alarm’s just kinda sensitive* because I’ve had it triggered by and calibrated on myself many times.
*(Alas, this could also be evidence that somewhere along the line I subconsciously tweaked this bit of my software to get more excuses for playing inquisitor with strangers.)
FWIW, I disagree with you but you don’t set off my “sociopath alarm”. I think you and Multiheaded may not be able to have a normal conversation with each other, but each of you seems to get along fine with the rest of LW.
I think you and Multiheaded may not be able to have a normal conversation with each other
If it helps, I can pretty much envision what’s needed for such a conversation, and understand full well that the reasons it’s not actually happening are all in myself and not in Bakkot. But I don’t have the motivation to modify myself in that specific way. On the other hand, it might come along naturally if I just improve in all areas of communication.
Heck, I might be speaking in Runglish. Bed tiem.
I’m curious: did you?
If it helps, my opinion of you has been raised by this thread, rather than lowered. I think very few LWians actually think less of you for this discussion, but that could just be me projecting (typical mind fallacy).
I think very few LWians actually think less of you for this discussion
That’s lumping a whole lot of things together. I’d gladly hire Bakkot if I were running pretty much any kind of IT business. I’d enjoy some kinds of debate with him. I’d be interested in playing an online game with him. I probably wouldn’t share a beer. I definitely would participate in a smear campaign if he were running for public office.
And it’s not clear that adding a person to the universe (as things stand today) will, on average, increase the amount of fun had down the line; this is why you’re not obliged to be trying to have as many children as possible at all times.
Now that’s pretty certain.
Do you mean that it’s pretty certain that I’m not obliged to be trying to have as many children as possible at all times?
Or that it’s pretty certain that the fact that it’s not clear that adding a person to the universe (as things stand today) will, on average, increase the amount of fun had down the line is why I’m not obliged to be trying to have as many children as possible at all times?
Or both?
Also: how important is it to you to manage your handle’s reputation in such a way as to maximize your ability to sway someone on LW in areas concerning ethical values and empathy?
Hmm. Ehhh? …Feels like both.
Also: how important is it to you to manage your handle’s reputation in such a way as to maximize your ability to sway someone on LW in areas concerning ethical values and empathy?
Unimportant, because I’m poor at persuading the type of people who care about their status on LW anyway, and am only at all likely to make an impact on the type of person who, like me, cares little/sporadically about their signaling here.
OK, thanks for clarifying.