I’d wondered why no one used a Time-Turner the moment they knew a troll was loose. Even if Dumbledore had already used up his hours, another professor could’ve used some form of priority magical communication to call for Aurors to travel six hours into the past, swiftly prepare to deal with a Hogwarts-attacking troll, and teleport to the site. Then I realized that Quirrell could block any time-travel attempt to stop the troll by exploiting the restriction against information traveling back more than six hours: wait until six hours after he wanted the attack to start, travel back six hours, and initiate the attack.
nebulous
I thought she mostly understood his sentence (though of course she hadn’t known about ELIZA beforehand) and owned a few magical items that could talk to a limited extent.
Augh, right. I’d forgotten that was there.
I get it now. Thanks!
Where can I find resources for putting together an effective, context-appropriate exercise routine?
Career interest: Eventually founding an IT startup, per Carl Shulman’s recommendation. Motivation: Making lots of money to donate to effective charities. Background: My dad is a freelance (Windows) computer assembly and repair guy, and I picked up some troubleshooting and upkeep tricks from him, but nothing impressive. I also took a computer science class where I gained some ability in Java.
A basic grasp of Java. I felt like there were other skills, but they’re unremarkable in the circles I’ll be spending my time in: an above-average vocabulary, general knowledge base, and dedication to studying relative to my school’s environment, plus Less Wrong memes.
I simply averaged the four numbers for those countries. I’ll edit the post to use an average weighted by the number of nets distributed in each country. I don’t know how to account for disproportionate early deaths in my calculations, since I don’t have data on the typical lifespan of, for instance, a Zambian who survives childhood.
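Concretely, the change I have in mind is something like this (a toy sketch: the country names, rates, and net counts below are made up for illustration; the real figures would come from the sources cited in the post):

```python
# Toy sketch of a simple average vs. an average weighted by nets distributed.
# All numbers here are made up; the real figures would come from the post's sources.
rates = {            # the per-country figure being averaged
    "Country A": 0.050,
    "Country B": 0.080,
    "Country C": 0.065,
    "Country D": 0.040,
}
nets = {             # hypothetical number of nets distributed in each country
    "Country A": 1_000_000,
    "Country B": 250_000,
    "Country C": 500_000,
    "Country D": 750_000,
}

simple_average = sum(rates.values()) / len(rates)
weighted_average = sum(rates[c] * nets[c] for c in rates) / sum(nets.values())

print(f"simple average:   {simple_average:.4f}")
print(f"weighted average: {weighted_average:.4f}")
```

The weighted figure counts each country in proportion to how many nets actually went there, so a country receiving few nets no longer pulls the average as hard as one receiving many.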
I refrained from rounding until the end so that people following my calculations from partway through would arrive at the same answers. It wasn’t really necessary, and now that you mention it, it does raise questions about significant digits, so I’ll round intermediate figures for display in the future.
Good point on the life expectancy being given for people currently born. I’ll edit the post to use life expectancy figures from ten years ago.
I had more ideas for Less Wrong posts, including an argument that donating to charity is more beneficial than paying for cryonics, but that assumes that the reader is altruistic. Since most people on Less Wrong are apparently not altruists, should I go ahead and post “For altruists, AMF > cryonics” here, or should I keep altruism-assuming arguments for some community more typically altruistic, like Felicifia?
I agree that existential risk is a higher priority. I used AMF in my example because its benefits are easy to accurately quantify.
Ah, thank you. Now I know the proper uses of the words “utilitarian” and “altruist”; that should help me communicate.
Edit: Just read thomblake’s comment. Now I’m back to using “utilitarian” to mean “altruistic value-maximizer”.
It is obvious that the trade-off is there. I thought that people weren’t taking the option of helping others at their own expense because they didn’t know that it produced more benefit overall than having fun at others’ expense. Apparently, the real reason that people who know about efficient charity aren’t helping others at their own expense is an objection to utilitarianism in general. Before posting, I had thought that most people at Less Wrong were utilitarians.
As for your questions, I’m a high school student, so I want to spend my money on college to increase my chances of making much more money later in life so that I can donate more to efficient charities.
From the Thiel Fellowship website: “The ideal candidate has ideas that simply cannot wait. She or he wants to change the world and has already started to do it in some fashion. We want fellows who dream big and have clear plans [...]” I’m interested in what you have in mind for your Thiel grant—some singularity-promoting project? Have you already done some work in that direction?
17, senior.
Not when I think about it that way, though recreational research in pop science, apologetics, and epistemology has gotten me to the point where I’m much more knowledgeable in those areas than my average classmate.
None in my family, and one in my school—the calculus and Bible teacher, very into apologetics. We argue for ~45 minutes every Tuesday thanks to my school’s shuffling schedule.
The highest-paying career I can attain; I don’t know which yet. Whatever school I can get into that leads to that career.
After reading about the grant winners and noting their absurd accomplishments, no.
Thank you. I had assumed that Less Wrong had no private messaging when the envelope icon near the top right corner of the interface took me to the reply to my first comment.
I’ve briefly tried to find a way to contact Shulman and failed. Is there a known way to contact him? Possibly useful information: I would prefer an IRC session over Skype. I’ve already followed the posted link, googled his name, googled his name with the word contact, looked at eightythousand.org’s contact page, and googled his name while restricting the search to eightythousand.org.
Is the linked website right about banking being the optimal career path for professional philanthropy or is there a more efficient method of moving resources to charities? I’m especially curious since I’ll choose my degree soon.
I was confused about Solomonoff induction a while ago. Since the observed string could have been produced by code anywhere inside whatever program is running, why would shorter programs be more likely to have produced it? My understanding of the answer I received was that, since the Turing machine produces its output linearly starting from the beginning of the program, a program with extra code before the piece that produced the observed string would have produced a different string. This made sense at the time, but since then I’ve thought of a variant of the problem involving not knowing the full length of the string, and I don’t think that answer addresses it.
The code that produces the string can be arbitrarily long, and when we try to apply the principles of Solomonoff induction as a general means of induction outside of computer science (for example, looking for laws of physics, or for the source of some event in an uncontained, low-surveillance environment), we often can’t observe the full string that the code may have produced. So why is a shorter program more likely? The program could be a billion times the length of the shortest program that produces the string and be producing a ton of unobserved effects. I could wave my hands, say something about Occam’s razor, and move on, but I thought Solomonoff induction was supposed to explain Occam’s razor.
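To make the question concrete, here is the weighting I understand the universal prior to use, as a toy sketch (the candidate programs and their bit-lengths below are made up for illustration; real Solomonoff induction sums over all programs of a universal Turing machine):

```python
# Toy illustration of the 2^(-length) prior over programs. The candidate
# programs and their lengths are invented; each one is assumed to reproduce
# the observed string equally well.
candidates = {
    "shortest program that outputs the observed string": 100,   # length in bits
    "same output, with extra unobserved consequences": 150,
    "a much longer program that also matches the data": 300,
}

# Each program consistent with the observations gets prior weight 2^(-length).
weights = {name: 2.0 ** -length for name, length in candidates.items()}
total = sum(weights.values())

for name, weight in weights.items():
    print(f"{name}: relative weight ~ {weight / total:.3e}")
```

Each individual long program gets an exponentially tiny weight under this prior, which is the part I can follow; my question is why that weighting is justified when we can’t observe most of what the longer programs would be doing.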