For this reason, I’ve found it’s very important to be careful not to assume that the world is doing sensible things or giving me all the information.
Yeah, this is good advice in general, and it’s definitely what I was doing wrong this time.
Well, yes, but what does a 2% failure rate per year even mean when it’s presented independent of a number of uses per year? I mean, without knowing the average number of uses that went into calculating “2% failure rate per year”, it seems like something of a misleading statement, as I’m reasonably certain (let’s say at least 90%) that it’s not intended to imply that condoms become more protective the more chances you have to use them.
I feel like I’m missing something basic here that would let me see why it’s a useful piece of information on its own.
That’s what I assumed as well, that it was 2% per incident, but I’m having a little trouble parsing those differently:
How is 2% per incident different than 2% per year? I’d interpret both of those statements as ‘on average, given perfect use, a condom will be ineffective at preventing pregnancy in one use out of fifty’.
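The difference can be made concrete with a small probability sketch (the numbers here are purely illustrative, and this assumes each use is an independent trial): a per-incident rate is fixed per use, while the implied per-year rate compounds with how often you use it.

```python
# Sketch: converting a per-incident failure probability into a per-year
# failure probability, assuming each use is an independent trial.
def annual_failure_rate(p_per_incident: float, uses_per_year: int) -> float:
    """Probability of at least one failure across a year's worth of uses."""
    return 1 - (1 - p_per_incident) ** uses_per_year

# A fixed 2% per-incident rate implies very different per-year rates
# depending on frequency of use:
print(annual_failure_rate(0.02, 1))   # one use per year: 2%
print(annual_failure_rate(0.02, 50))  # fifty uses per year: roughly 64%
```

So “2% per incident” and “2% per year” only coincide for someone who uses one condom per year; for anyone else, the per-year figure is a summary that silently bakes in some assumed usage frequency.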
Speaking from the point of view of someone who wasn’t previously completely sold on cryonics, this is a very thought-provoking read.
As a brief tangent, I’m a little dismayed at the number of comments on that article that basically boiled down to ‘it was too long to read’.
I wouldn’t one-box at 1:1.01 odds; the rule I was working off was: “Precommit to one-boxing when box B is stated to contain at least as much money as box A,” and I was about to launch into this big justification on how even if Omega was observed to have 99+% accuracy, rather than being a perfect predictor, it’ll fail at predicting a complicated theory before it fails at predicting a simple one...
...and that’s when I realized that “Precommit to one-boxing when box B is stated to contain more money than box A,” is just as simple a rule that lets me two-box at 1:1 and one-box when it will earn me more.
TL;DR—your point is well taken.
You might as well precommit to one-box at 1:1 odds anyway. If Omega has ever been observed to make an error, it’s to your advantage to be extremely easy to model in case the problem ever comes up again. On the other hand, if Omega is truly omniscient… well, you aren’t getting more than $1,000 anyway, and Omega knows where to put it.
I agree with you; the context from earlier in the strip was about reading a study with evidence pointing to the T-rex being a timid scavenger, and then getting transported back in time and seeing a T-rex acting timid.
The secret is to make wanting the truth your entire identity, right. If your persona is completely stripped down to just “All I care about is the facts”, then the steps disappear, the obstacles are gone. Tyrannosaurus was a scavenger? Okay! And then you walk right up to it without hesitation. The evidence says the killer was someone else? Okay, see you later sir, sorry for the inconvenience, wanna go bowling later now that we’re on a first name basis? And so on. Just you and a straight path to the truth. That is how you become perfect.
Yeah, that’s a good idea. I was stuck in the idea of a set curriculum, but weaving it in wherever possible will probably help it stick better.
Pray tell. Or just tell, no praying required, that would be telling. Just prying. Required, I mean.
It really boils down to the convergence of a few factors: he’s already learning at a higher grade level than he’d be placed in by his age, he suffers from some hyperactivity issues, and, quite frankly, my wife and I think we can do a better job than the public system. Or at least my wife can; I’m not convinced of my abilities as a teacher yet.
Just ingrain the rationality training as an aspect of the way you interact with him; I go for the Socratic method. Don’t set apart “rationality training” time (or are you planning to be irrational unless rationality is scheduled?!). Helping your kid develop mental models of others is my favorite.
Obviously I’m not planning to be irrational at any given moment, but I was originally stuck in the mindset of curriculum since that’s what we’ve been going with for math, reading, and science. This is probably a better idea, though.
My wife and I have decided we’re going to homeschool our son, almost five, for various reasons. What age do you think it would be appropriate to start rationality training, and how would you go about it? Are there any particularly kid-friendly resources on rationality that anyone can recommend? (The sequences are good for beginners, but they’re well above the level of a five year old).
That sounds a bit like muddling the hypothetical, along the lines of “well, if I don’t let my family be tortured to death, all those strangers dying would destabilize society, which would also cause my loved ones harm”.
Those were the sort of lines I was thinking along, yes. Framing the question in that fashion… I’m having some trouble imagining numbers of people large enough. It would have to be something on the order of ‘where x contains a majority of any given sentient species’.
The realization that I could willingly consign billions of people to death and be able to feel like I made the right decision in the morning is… unsettling.
As the saying goes, “if the hill will not come to Skeeve, Skeeve will go to the hill”.
I wish I could upvote you a second time just for this line. But yes, this is pretty much what I meant; I didn’t intend to imply that I wanted my self-image to be accurate and unchanging from what it is now, I’d just prefer it to be accurate.
Is there an amount of human suffering of strangers to avoid which you’d consent to have your wife and child tortured to death?
My first instinct was to find the biggest font I could to say ‘no’. After actually stopping to think about it for a few minutes… I don’t know. It would probably have to be enough suffering that it would destabilize society, but I haven’t come to any conclusions. Yet.
If the implications make you uncomfortable (maybe they aren’t in accordance with facets of your self-image), well, there’s not yet been a human with non-contradictory values so you’re in good company.
Heh, well, I suppose you’ve got a point there, but I’d still like my self-image to be accurate. Though I suppose around here that kind of goes without saying.
1) I find that interacting with other people face-to-face is mentally exhausting for me. A few hours or so of prolonged exposure is not so bad, but more than that and I have to exert noticeable effort to not be snappish and crabby with people.
2) I suffer from an unreasonable need to sit with my back to a wall, or some other solid structure, even within my own home.
I don’t think I was having any trouble distinguishing between “would”, “should”, and “prefer”. Your analysis of my statement is spot on—it’s exactly what I was intending to say.
If morality is (rather simplistically) defined as what we “should” do, I ought to be concerned when what I would do and what I should do don’t line up, if I want to be a moral person.
What I mean by ‘immorality’ is that I, on reflection, believe I am willing to break rules that I wouldn’t otherwise if it would benefit my family. Going back to the original switch problem, if it was ten people tied to the siding, and my wife and child tied to the main track, I’d flip the switch and send the train onto the siding.
I don’t know if that’s morally defensible, but it’s still what I’d do.
I find myself thinking mostly around the same lines as you, and so far the best I’ve been able to come up with is “I’m willing to accept a certain amount of immorality when it comes to the welfare of my wife and child”.
I’m not really comfortable with the implications of that, or that I’m not completely confident it’s not still a rationalization.
My own anti-procrastination technique is to tell my wife that I’m going to be working on X project, and that I’ll talk to her about what I’ve been doing when I’m done. After that, I find that all it takes to put myself back on task is a gentle reminder to myself that my options are:
1. Get some work done
2. Admit that I didn’t actually get much done
3. Lie about my progress
My natural aversion to options two and three is usually enough to get me back on task.
Or do both!
And thus, Aliza_Ludshowski was born.
Not yet, but that would seem to be a plausible end-game for Quirrelmort.