MIRI continues to be in good hands!
I’m not sure LW is a good entry point for people who are turned away by a few technical terms. Responding to unfamiliar scientific concepts with an immediate surge of curiosity is probably a trait I share with the majority of LW’ers. While it’s not strictly a prerequisite for learning rationality, it certainly is for starting in medias res.
The current approach is a good selector for dividing the chaff (well educated because that’s what was expected, but no true intellectual curiosity) from the wheat (whom Deleuze would call thinkers-qua-thinkers).
HPMOR instead, maybe?
That’s a good argument if you were to construct the world from first principles. You wouldn’t get the current world order, certainly. But just as arguments against, say, nation-states, or multi-national corporations, or what have you, do little to dissuade believers, the same applies to let-the-natural-order-of-things-proceed advocates. Inertia is what it’s all about. The normative power of the present state, if you will. Never mind that “natural” includes antibiotics, but not gene modification.
This may seem self-evident, but what I’m pointing out is that by saying “consider this world: would you still think the same way in that world?” you’d be skipping the actual step of difficulty: overcoming said inertia, leaving the cozy home of our local minimum.
Disclosing one’s sexual orientation won’t be (mis)construed as a status grab in the same way as disclosing one’s (real or imagined) intellectual superiority. Perceived arguments from authority must be handled with supreme care, otherwise they invariably set the stage for a primate hierarchy contest. Minute details in phrasing can make all the difference: “I could engage with people much smarter than you, yet I choose to help you, since you probably need my help and my advice” versus “I had the following experiences, hopefully someone [impersonal, not triggering status comparisons] can benefit from them”. sigh, hoo-mans … I could laugh at them all day if I wasn’t one of them.
I’m happy to read your posts, but then I may be less picky about my cognitive diet than others. I mean, the alternative would be watching Hell’s Kitchen. You do beat Gordon Ramsay on the relevant metrics, by a large amount.
Then again, maybe I’m just a bit jealous of your idealism.
I dislike the trend to cuddlify everything, to make approving noises no matter what, and to frame criticisms as merely some avenue for potential further advances, or some such.
On the one hand, I do recognize that this works better for the social animals that we are. On the other hand, aren’t we (mostly) adults here? Do we really need our hand held constantly? It’s similar to the constant stream of “I LOVE YOU SO MUCH” in everyday interactions: a race to the bottom in terms of deteriorating signal/noise ratios. How are we supposed to convey actual approval, shout it from the rooftops? Until that is the new de facto standard of neutral acknowledgment?
A Fisherian runaway, in which a simple truth is disregarded: When “You did a really good job with that, it was very well said, and I thank you for your interest” is a mandatory preamble to most any feedback, it loses all informational content. A neutral element of speech. I do wish for a reset towards more sensible (= information-driven) communication. Less social-affirmation posturing.
But, given the sensitive nature of topics here, this may be the wrong avenue to effect such a reset, invoking Crocker’s Rules or no. Actually skipping the empty phraseology should be one of the later biases to overcome.
I don’t think that it carves reality at its joints to call that “mathematical ability.”
… and we’re down to definitional quibbles, which are rarely worth the effort, other than simply stating “I define x as such and such, in contrast to your defining x as such and such”. Reality has no intrinsic, objective dictionary with an entry for “mathematical ability”, so such discussions can mostly be solved by using some derivative terms x1 and x2, instead of an overloaded concept x.
Of course, the discussion often reduces to who has primacy over the original wording of x, which is why I’d suggest that neither get it / taboo x.
I agree that a more complex, nuanced framework would better correspond to different aspects of cognitive processing, but then that’s the case for most subject matters. Bonus for not being as generally demotivating as “you lack that general quality called math ability”, malus points because of a complexity penalty.
Teaching happiness can be—and often is—at odds with teaching epistemic rationality.
I amended the grandparent. Suppose, for the sake of argument, that you agreed with my estimate of this being the proverbial “last, best hope”. Then giving away the one potentially game-changing advantage to barter for a globally insignificant “victory” would be the epitome of an overly greedy algorithm. Losing sight of the actual goal because an authority figure told you so; in a way, not thinking for yourself beyond the task as stated.
Making that point sounds, on reflection, like exactly the type of thing I’d expect Eliezer to do. Do what I mean, not what I say.
as opposed to, say, Voldemort killing himself immediately and fighting me within the horcrux system
Ocupado. Assuming it was not, even Voldemort would have some sort of reaction latency to such an outside context problem. And even assuming he reacted instantly, that still sounds like better chances than merely buying a few days of unconsciousness.
Personally, I feel that case 1 (“doesn’t work at all”) is much more probable
I’ve come to the opposite conclusion. Should we drag out quotes to compare evidence? Is your estimate predicated on just one or two strong arguments, and if so could I bother you to state them? Most of the probability mass in my estimate comes from Voldemort’s former reluctance to test the horcrux system and his prior blind spots as a rationalist when designing it, plus the oft-reinforced notion of Harry actually being a version of Tom Riddle: indistinguishable to a ‘powerful’ magical artifact (the Map), acting as an adult while an 11-year-old, “Riddles and Answers” as the FF.net title, etc.
Speaking up prolongs Harry’s life until Voldemort does an experimental test.
The actual challenge may be to notice that the challenge isn’t well-posed, that the binary variable to be optimized (“live, if only a little longer”) is but a greedy solution, probably suboptimal for reaching the actual goal. Transcend the teacher’s challenge, solve the actual problem, you know?
Speaking up gives up an easy win
Kind of important. Winning the test, losing the war.
3) Horcrux hijacking works, and there’s no workaround. It doesn’t matter if Harry speaks up or not.
I disagree; it matters: Voldemort goes back to the mirror and freezes Harry in time, keeping him unconscious via his Death Eaters. He outclasses everyone else who’s left by far more than he outclasses Harry, from what we’ve seen. There are plenty of ways to simply cryonically freeze Harry, then keep him under Death Eater guard until he has made sure he closed the loopholes. Consider that he only learned he could test the system without danger to himself, by using others as proxy “test units”, a few hours prior to current events.
PS: There’s, incidentally, a zen-like beauty to the solution: in order to survive, all you need to do is die.
“No action” is an action, same as any other (for a grokkable reference, see consequentialists and the trolley problem). Also, obviously it wouldn’t be “no action”; it would be selling Voldemort the idea that there’s nothing left, maybe revealing the secret deemed most insignificant and then begging for that to apply to both parents.
1) (Harry tells Voldemort his death could hijack the horcrux network) doesn’t seem unlikely at all. Both hints from within the story (the Marauder map) and on the meta level (“Riddles and Answers”) suggest an unprecedented congruence of identity, at least in the sense of magical artifacts (the map) being unable to tell the difference.
I did not post it since, strictly speaking, Harry should keep quiet about it: losing the challenge of not dying (having learned how to lose), but increasing his chances of winning the war. Immediately, even: since the new horcrux system enables ghost travel, Harry could just try to overwrite / take possession of Voldemort’s body. Either it works and he wins, or it doesn’t and the magical resonance kills … well, kills only Voldemort, since Harry at that point would be the undead spirit.
That solution occurred to me as I was reading the challenge, and I was puzzled that on my (admittedly cursory) reading of a bunch of solutions, I did not find any exactly resembling it. Either the approach is deeply flawed and I don’t see it, or everyone else is taking this as literally as I did and holding off on proposing it (since it may not be precisely the teacher’s password as worded in the challenge), or something else.
Skimming over (part of) the proposed solutions on FF.net has thoroughly destroyed any sense of kinship with the larger HPMoR readership. Darn, it was such a fuzzily warm illusion. In concordance with Yvain’s latest blog post, there may be few if any general shortcuts to raising the sanity waterline.
Harry’s commitment is quite weaksauce, and it was surprising that he wasn’t called on it:
I sshall help you obtain the Sstone (...) sshall not do anything I think will annoy you to no good end. Sshall call no help if I expect them to be killed by you or for hosstagess to die.
So he’s free to call help as long as he expects to win the ensuing encounter. After which he could hand the Stone to a subdued Quirrell for just a moment, technically fulfilling that clause as well. Also, the “to no good end” qualifier? “Winning against Voldemort” certainly would count as a good end, cancelling that part as well.
Well, depends on how much you discount the expected utility of cryonics due to Pascal’s Wager concerns. The variance of the payoff for freezing tissue certainly is much smaller, and freezing tissue really isn’t a big deal from a technical or even societal point of view, as evidenced by, say, female egg freezing for fertility preservation.
The (?) proves you right about the philosophy part.
Seems like there’s some feminists or some 6′5 men with a superiority complex around.
Well, I am 6′7, without a superiority complex of course. That’s not why I downvoted you, though, and since you asked for an explanation:
I’m reading the comments and looking for some new ammo for my next fight in the gender wars.
That’s not the kind of approach (arguments as soldiers) we’re looking for in a rationality forum. One of the prerequisites is a willingness to change your mind, which seems to be setting the bar too high for some people.
I’d call it an a-rationality quote, in the sense that it’s just an observation; one backed up by evidence but with no immediate relevance to the topic of rationality.
On second thought, it does show a kind of bias, namely the “compete-for-limited-resources” evolutionary imperative which introduced the “bias” of treating most social phenomena as zero-sum games. Bias in quotes because there is no correct baseline to compare against, tendency would probably be a better term.
Strong statement from Bill Gates on machine superintelligence as an x-risk, on today’s Reddit AMA:
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.
I too have the impression that for the most part the scope of the “effective” in EA refers to ”… within the Overton window”. There’s the occasional stray ‘radical solution’, but usually not much beyond “let’s judge which of these existing charities (all of which are perfectly societally acceptable) are the most effective”.
Now there are two broad categories to explain that:
a) Effective altruists want immediate or at least intermediate results / being associated with “crazy” initiatives could mean collateral damage to their efforts / changing the Overton window to accommodate actually effective methods would be too daunting a task / “let’s be realistic”, etc.
b) Effective altruists don’t want to upset their own System 1 sensibilities, their altruistic efforts would lose some of the fuzzies driving them if they needed to justify “mass sterilisation of third world countries” to themselves.
Solutions to optimization problems tend to push every variable that isn’t explicitly constrained to an extreme value. The question then is which ideals we’re willing to sacrifice in order to achieve our primary goals.
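To illustrate (a minimal sketch; the toy objective, numbers, and variable names are my own invention, not anything from the EA literature), here’s what a linear-programming solver does when only one variable carries an explicit constraint:

```python
from scipy.optimize import linprog

# Toy objective: maximize impact = 3*x1 + 5*x2 (linprog minimizes, so negate).
c = [-3, -5]

# Only x1 ("conventional spending") gets an explicit budget constraint;
# x2 ("the variable nobody thought to constrain") has nothing but a wide box bound.
A_ub = [[1, 0]]
b_ub = [10]
bounds = [(0, 10), (0, 1000)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)  # -> [10., 1000.]: the unconstrained variable gets driven to its extreme
```

Drop the box bound on x2 entirely and the solver simply reports the problem as unbounded: the “optimum” runs off to infinity along whatever dimension nobody bothered to fence in.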
As an example, would we really rather have people decide just how many children they want to create, only to see those children perish in the resulting population explosion? Will we influence those decisions based only on “provide better education, then hope for the best”, in effect preferring starving families with the choice to procreate whenever over non-starving families without said choice?
I do believe it would be disastrous for EA as a movement to be associated with ideas too far outside the Overton window, and that is a tragedy, because it massively restricts EA’s maximum effectiveness.