The sense of doom. I thought the magic-can’t-interact rule was mostly just the strongest edge of that, e.g. (maybe even i.e.) their magic could interact, but it would hurt them enough that they don’t try.
That could just be a feature of the True Patronus, which is pretty anti-death and especially anti-indifference-to-other-people’s-lives.
Your point 2 is another thing I’m getting pretty suspicious of. Quirrell has set up a very long plan, and could easily have faked this effect with wandless magic, or an enchantment he could later dispel, all along.
Certainly, and in the actual situation I would have done worse than he actually did. But this kind of armchair analysis is extremely enjoyable, and a good way to improve your in situ skills.
Harry made some serious mistakes in chapter 105.
First, the Parseltongue honesty-binding could just be Quirrell’s (selective!) wandless magic; I mean, he just forged a note “from yourself” (and why do you even MAKE a self-recognition (“I am a potato”) policy if you just forget all about it once you’re in a life-stakes intrigue?), so you need a lot of extra suspicion going forward. But assuming it’s real… there are crucial questions Harry can now profitably ask, with his help conditional on getting immediate Parseltongue answers, along the lines of:
“Why did you set up this elaborate ruse instead of just asking me? Most of what you’re saying right now sounds like something I would’ve probably agreed to if you were open about it, but no, you had to pretend you were dying and kill my friend, so it sure seems like you’re planning nefarious things I’d rather not aid even at the cost of my life and the hostages’ lives… does my CURRENT utility function actually prefer your planned results to the death of me and the hostages?”
(This isn’t the perfect phrasing; for one thing, Quirrell doesn’t necessarily know Harry’s utility function to high accuracy; for another, Harry might have refused the “open” proposal at a weaker dispreference than “this is worse than my death”. But something similar...)
Iff Quirrell is at all “innocent” at this point, he’d want to answer these, and never mind the “my policy is never to reveal that much or people will know I’m guilty later when I actually need to keep mum” stuff; these stakes seem high enough to outweigh any future similar dealings. If he’s guilty, then just die like you’d apparently prefer.
[the only edits I made here after getting responses were to correct my spelling of “Quirrell”, and this note]
This is similar to choosing strict determinism over compatibilism. Which players are the “best” depends on each of those players’ individual efforts during the game. In any case, you could extend the idea to the executives too: which groups of executives acquire better players is largely a function of which have the best executives.
Efforts are only one variable here, and the quote did say “largely a function of”. That said, look at how often teams that replay each other during a season come away with a different winner.
Mentioning a similarity to past successful decisions seems like it qualifies as “constructing a more contextually specific argument than ‘you’ll understand when you’re older’”.
While this is on My Side, I still have to protest trying to sneak any side (or particular (group of) utility function(s)) into the idea of “rationality”.
But the map is the map...
Done! The length is fine; the questions are interesting and fun to consider.
EDIT: removed concerns about “cryivf” if. “srzhe” nf ynetrfg obar (znff if. yratgu); gur cryivf nccneragyl vfa’g n “fvatyr obar”.
I would’ve entered! I loved the one-shot PD tournament last summer. In the future, please move popular tournament announcements to Main!
This matches my experience extremely well.
(If there is something called “Chelston’s Fence” (which my searches did not turn up), apologies.)
Chesterton’s Fence isn’t about inertia specifically, but about suspecting that other people had reasons for their past actions even though you currently can’t see any, and finding out those reasons before countering their actions. In Christianity’s case the reasons seem obvious enough (one of the main ones: trust in a line of authority figures going back to antiquity + antiquity’s incompetence at understanding the universe) that Chesterton’s Fence is not very applicable. Willpower and other putative psychological benefits of Christianity are nowhere in the top 100 reasons Taleb was born Christian.
That sounds even more formal than “person” to me, actually.
Edit: how about “someone who acts”?
It sort of fits a (not very common) idiomatic pattern where the compliment is empty-to-sarcastic, but it seems pretty obvious that you didn’t intend it that way, and I can’t actually think of any examples I learned the idiom from.
Unless Quirrell isn’t primarily interested in the stone here, but in tricking Harry into doing something else in the course of trying to get the stone.
Then, there was the thing where I would leave plastic syringe caps and bits of paper from wrappers in patients’ beds. This incurred approximately equal wrath to the med errors; in practice, a lot more, because she would catch me doing it around once a shift. I agreed with her on the possible bad consequences. Patients might get bedsores, and that was bad. But there were other problems I hadn’t solved, and they had worse consequences. I had, correctly I think, decided to focus on those first.
When I do this kind of triaging (the example that comes to mind first is learning competitive fighting games), I often (certainly not always) do end up trying to fix some of my lower-priority common mistakes at the same time, but just not caring about them as much. This often seems to make them easier to fix than if I had prioritized them, which seems related to the main point of your post.
This seems a bit more like an Ayn Rand joke than a Less Wrong joke.
It’s all well and good to say you don’t maximize utility for one reason or another, but when somebody tells me that they actually maximize “minimum expected utility”, my first inclination is to tell them that they’ve misplaced their “utility” label.
My first inclination when somebody says they don’t maximize utility is that they’ve misplaced their “utility” label… can you give an example of a (reasonable?) agent which really couldn’t be (reasonably?) reframed as some sort of utility maximizer?
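(In case it helps to pin down what’s being compared: here is a minimal toy sketch, with all payoffs and probability models invented purely for illustration, of one common reading of “maximizing minimum expected utility”, i.e. taking the worst case over several candidate probability models instead of the expectation under a single one. It is not necessarily what the original commenter meant.)

```python
# Toy sketch (all payoffs and probability models are made up) of one
# reading of "maximize minimum expected utility": take the worst case
# over several candidate probability models, rather than the
# expectation under a single model.

actions = ["safe", "risky"]
outcomes = ["good", "bad"]

# Utility of each (action, outcome) pair -- invented values.
utility = {
    ("safe", "good"): 1.0, ("safe", "bad"): 0.8,
    ("risky", "good"): 2.0, ("risky", "bad"): 0.0,
}

# Two candidate probability models over outcomes (also invented).
models = [
    {"good": 0.9, "bad": 0.1},
    {"good": 0.2, "bad": 0.8},
]

def expected_utility(action, model):
    return sum(model[o] * utility[(action, o)] for o in outcomes)

# Standard expected-utility maximizer, committed to the first model alone.
eu_choice = max(actions, key=lambda a: expected_utility(a, models[0]))

# "Maximin expected utility": maximize the worst case across all models.
maximin_choice = max(
    actions, key=lambda a: min(expected_utility(a, m) for m in models)
)

print(eu_choice)       # "risky" (1.8 vs 0.98 under the first model)
print(maximin_choice)  # "safe"  (worst-case 0.84 beats worst-case 0.4)
```

Even so, the maximin agent’s choices over a fixed menu like this can still be summarized after the fact by some assignment of numbers to the options, which is roughly the “misplaced label” worry above.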
I think these are sufficient evidence that this is the real Dumbledore, not the mirror showing Quirrell what he wants.