“I need access to the restricted section, I don’t want another one of my friends to die”
I would suspect that an argument along those lines would be much more likely to succeed if Quirrell hadn’t given his instructions.
I have read around and I still can't really tell what Westergaardian theory is. I can see how harmony fails as a framework (it doesn't work very well for a lot of the music I have tried to analyze), so I think there is a good chance that Westergaard is (more) right. However, beyond the fact that there are these things called lines, and that there exist rules for manipulating them (I have not actually found a list or description of such rules), I am not sure how this is different from counterpoint. I don't want to read a whole textbook to figure this out; I would rather read ~5-10 pages of big-picture exposition.
Just telling everyone to keep Harry away from it improves the security.
In that link, is that the 3-dimensional analog of living on a 2D plane with a hole in it, where entering the hole flips you to the other side of the plane? (Or: take a torus, cut along the circle farthest from the center, and extend the new edges out to infinity?)
And mentioned numerous times.
Nitpick: I would consider the Weierstrass function a different sort of pathology than non-standard models or Banach-Tarski: a practical pathology rather than a conceptual one. The Weierstrass function is just a fractal. It never smooths out no matter how far you zoom in.
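For reference, the standard construction (a textbook fact, not something specific to this thread): Weierstrass's function is

$$W(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x), \qquad 0 < a < 1,\ b \text{ an odd integer},\ ab > 1 + \tfrac{3\pi}{2}.$$

It is continuous everywhere (the $a^n$ factors make the series converge uniformly) but differentiable nowhere: the $\cos(b^n \pi x)$ terms oscillate faster and faster, adding new wiggles at every scale, which is exactly the never-smooths-out behavior above.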
I think any correct use of “need” is either implicitly or explicitly a phrase of the form “I need X (in order to do Y)”.
Why does he think of beefing up the restricted section’s security only after his conversation with Harry? What did he learn?
I also don’t see bringing Harry’s parents to Hogwarts as being terribly predictable.
There is no way Harry would get expelled. He is at Hogwarts for his protection—to be close to Dumbledore—not so that he can go to school.
Only if they autopsy the troll and figure out it was transfiguration and not something else. But I’m pretty sure Quirrellmort already knows Harry can do that.
And knowing what it can do: killing a dementor. They didn't see that, though someone might be able to figure out that his super patronus is the reason dementors are afraid of him.
Wasn’t exactly a falling rock, more like a rapidly expanding jawbreaker.
She was still within the wards/within Hogwarts grounds.
Is it just me, or has no one in the story really considered that Quirrell = Voldemort? Why does the hypothesis that Quirrell = Grindelwald briefly come up first? Why is everyone blindly trusting him, even when they think he might be responsible for some of the bad stuff going on? It seems like everyone (especially the Hogwarts faculty and Harry) is doing some serious mental gymnastics to avoid considering that he is actually, seriously evil.
I think quite a few meetups have at least one person who has gone to a workshop. The workshop could include some teaching on how to teach, so that when attendees go back to their meetups, they can teach there.
When was this last updated? Has anything new come out since?
Why do you think that is a wrong question? I am mostly asking because I want something interesting to write about, something I would be motivated to write.
The math I am having fun with, I don't know thoroughly enough to explain (and I am learning it from a really good piece of exposition).
The rationality one looks like fun; I will see if I can do some of it. First step: hack it into pieces, so I am working on a small project rather than a massive supergoal project.
The two things that come to mind are things I am still learning: general category theory (rather than category theory for the purpose of X), and a higher-level, structural viewpoint on Bayes (rather than basic articles on how to compute Bayes' theorem and what it means). Also something on what actually happens when you extend mathematical logic using Bayesian probability. I could probably start on the second one right now...
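For contrast, the basic computation I mean is just the theorem itself,

$$P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)},$$

applied to made-up numbers: with a 1% prior on a disease, a test with 90% sensitivity and a 5% false-positive rate gives $P(D \mid +) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.05 \times 0.99} \approx 0.15$. The structural viewpoint I want to write about sits above that level.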
I see what you are saying, but I would be more motivated if it felt like I was doing useful work, and I don't really know what to write about. So I am kind of looking for inspiration/motivation and ideas.
How do you upgrade people into rationalists? In particular, I want to upgrade some younger math-inclined people into rationalists (peers at university). My current strategy is:
- incidentally name-drop my local rationalist meetup group (e.g., "I am going to a rationalists' meetup on Sunday")
- link to LessWrong articles whenever relevant (rarely)
- be awesome, and claim that I am awesome because I am a rationalist (which neglects a bunch of other factors in why I am so awesome)
- when asked, motivate rationality by pointing to a whole bunch of cognitive biases, and to the fact that we don't naturally have principles of correct reasoning; we just do what intuitively seems right
This is quite passive (other than the name-dropping and article-linking) and mostly requires them to ask me about it first. I want something more proactive that is not straight-up linking to LessWrong, because the first thing they go to is The Simple Truth, and they immediately get turned off by it (The Simple Truth shouldn't be the first post in the first sequence that people are recommended to read on LessWrong). This has happened a number of times.