Sunny from QAD
Yeah. I should have made it clear that this post is prescribing a way to evaluate incoming arguments, rather than describing how outgoing arguments will be received by your audience.
Arguments in parallel vs arguments in series
Alternate framing: if you already know that criticisms coming from one’s outgroup are usually received poorly, then the fact that they are received better when coming from the ingroup is a hidden “success mode” that perhaps people could use to make criticisms go down easier somehow.
Idea: “Ugh-field trades”, where people trade away their obligations that they’ve developed ugh-fields for in exchange for other people’s obligations. Both people get fresh non-ugh-fielded tasks. Works only in cases where the task can be done by somebody else, which won’t be every time but might be often enough for this to work.
or 10^(+/- 35) if you’re weird
Excuse you, you mean 6^(+/- 35) !
This is a nice story, and nicely captures the internal dissonance I feel about cooperating with people who disagree with me about my “pet issue”, though like many good stories it’s a little simpler and more extreme than what I actually feel.
This could be a great seed for a short story. The protagonist can supposedly see the future but actually they’re just really really good at seeing the present and making wise bets.
May I see it too?
Asking because the post above advised me to purchase cheap chances at huge upsides and this seems like one of those ^^
This is a lovely post and it really resonated with me. I’ve yet to really orient myself in the EA world, but “fix the normalization of child abuse” is something I have in my mind as a potential cause area. Really happy to hear you’ve gotten out, even if the permanent damage from sleep deprivation is still sad.
I just caught myself committing a bucket error.
I’m currently working on a text document full of equations that use variables with extremely long names. I’m in the process of simplifying it by renaming the variables. For complicated reasons, I have to do this by hand.
Just now, I noticed that there’s a series of variables O1-O16, and another series of variables F17-F25. For technical reasons relating to the work I’m doing, I’m very confident that the name switch is arbitrary and that I can safely rename the F’s to O’s without changing the meaning of the equations.
But I’m doing this by hand. If I’m wrong, I will potentially waste a lot of work by (1) making this change, (2) making a bunch of other changes, (3) realizing I was wrong, (4) undoing all the other changes, (5) undoing this change, and (6) re-doing all the changes that came after it.
And for a moment, this spurred me to become less confident about the arbitrariness of the naming convention!
The correct thought would have been “I’m quite confident about this, but seeing as the stakes are high if I’m wrong and I can always do this later, it’s still not worth it to make the changes now.”
The problem here was that I was conflating “X is very likely true” with “I must do the thing I would do if X was certain”. I knew instinctively that making the changes now was a bad idea, and then I incorrectly reasoned that it was because it was likely to go wrong. It’s actually unlikely to go wrong, it’s just that if it does go wrong, it’s a huge inconvenience.
Whoops.
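The distinction above can be made concrete with a tiny expected-cost sketch. The numbers here are hypothetical placeholders I've invented for illustration, not figures from the anecdote; the point is only that the decision turns on the stakes, not on the probability alone:

```python
# Sketch with made-up numbers: separating "how likely am I to be right?"
# from "what does acting now cost me if I'm wrong?"
p_wrong = 0.02            # assumed: I'm quite confident the rename is safe
cost_if_wrong = 6.0       # assumed: hours spent undoing/redoing hand edits (steps 1-6)
cost_of_deferring = 0.05  # assumed: hours to just do the rename later instead

# Expected cost of renaming now, driven entirely by the downside:
expected_cost_now = p_wrong * cost_if_wrong  # 0.12 hours

# Deferring wins even though being wrong is unlikely -- no need to
# revise p_wrong downward to justify waiting:
print(expected_cost_now > cost_of_deferring)  # prints True
```

Nothing about this calculation requires becoming less confident that the naming convention is arbitrary; `p_wrong` stays small throughout.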
It’s funny that this came up on LessWrong around this time, as I’ve just recently been thinking about how to get vim-like behavior out of arbitrary text boxes. Except I also have the additional problem that I’m somewhat unsatisfied with vim. I’ve been trying to put together my own editor with an “API first” mentality, so that I might be able to, I don’t know, eventually produce some kind of GTK widget that acts like my editor by default. Or something. And then maybe it’ll be easy to make a variant of, say, Thunderbird, in which the email-editing text box is one of those instead of a normal text box.
(If you’re curious, I have two complaints about vim. (1) It’s a little bloated, what with being able to open a terminal inside of the editor and using a presumably baked-in variant of sed to do find-and-replace rather than making you go through a generic “run such-and-such program on such-and-such text selection” command if you want the fancy sed stuff. And (2) its commands are slightly irregular, like how d/foo deletes everything up to what the cursor would land on if you just typed /foo but how dfi deletes everything up to and including what the cursor would land on if you just typed fi.)
Also as a side note, I’m curious what’s actually in the paywalled posts. Surely people didn’t write a bunch of really high-quality content just for an April Fools’ day joke?
I was 100%, completely, unreservedly fooled by this year’s April Fools’ joke. Hilarious XDD
the paucity of scenarios where such a proof would be desired (either due to a lack of importance of such character, or a lack of relevant doubt),
(or by differing opinion of what counts as desirable character!)
To summarize: a binary property P is either discernible (you can’t keep your status private) or not (you can’t prove your status).
It seems like “agent X puts a particular dollar value on human life” might be ambiguous between “agent X acts as though human lives are worth exactly N dollars each” and “agent X’s internal thoughts explicitly assign a dollar value of N to a human life”. I wonder if that’s causing some confusion surrounding this topic. (I didn’t watch the linked video.)
I haven’t read the post, but I thought I should let you know that several of the questions have answers that are not spoilered.
(The glitch exploits a subpixel misalignment present in about 0.1% of Toyota cars and is extremely difficult to execute even if you know you have a car with the alignment issue right in front of you.)
If you think traffic RNG is bad in the Glitchless category, you should watch someone streaming any% attempts. The current WR has a three-mile damage boost glitch that skips the better part of the commute, saving 13 minutes, and the gal who got it had to grind over 14k attempts for it (about a dozen of them got similar boosts but died on impact).
Right. The 100 arguments the person gives aren’t 100 parallel arguments in favor of them having good reasons to believe evolution is false, for exactly the reason you give. So my reasoning doesn’t stop you from concluding that they have no good reason to disbelieve.
And, they are still 100 arguments in parallel that evolution is false, and my reasoning in the post correctly implies that you can’t read five of them, see that they aren’t good arguments, and conclude that evolution is true. (That conclusion requires a good argument in favor of evolution, not a bad one against it.)
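A quick numerical sketch of why spot-checking a few parallel arguments settles so little. This is my own illustration with an assumed per-argument probability, not anything from the post: suppose each of the 100 arguments independently has some small chance of being sound.

```python
# Assumed for illustration: each argument independently has a small
# probability q of being a good argument.
q = 0.02

# Finding that 5 sampled arguments are bad is roughly the expected
# outcome either way, so it barely updates you. The chance that at
# least one of the remaining 95 arguments is good stays high:
p_some_good_remaining = 1 - (1 - q) ** 95
print(round(p_some_good_remaining, 3))  # prints 0.853
```

So even after five bad samples, "all 100 arguments fail" is far from established; concluding evolution is true requires a good argument for it, not a handful of bad ones against it.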