I really like Sean Carroll’s The Big Picture as an intro to rationality and naturalism for the general public. It covers pretty much all the topics in RfAItZ, along with several others (esp. physics stuff). It’s shorter and a lot less technical than RfAItZ, but it’s readable and I think it does a good job of laying out the basic perspectives.
iarwain1
Try 80,000 Hours’ guide, especially here.
In our world, classical mechanics (Newton + Maxwell and their logical implications) holds for most everyday experiences at slow speeds (relative to the speed of light) and at scales larger than the atomic realm.*
Question: Is this necessarily true for every possible world that matches our macroscopic physical observations? Is it possible to construct an alternative set of physical laws such that the world would function exactly as our world does on a macroscopic, everyday level, but that would violate Newton’s laws or Maxwell’s laws or thermodynamics or the like? Again, I’m not talking about violating those laws in extreme cases (close to the speed of light, tiny scales) where these laws don’t really apply even in our world. I’m talking about a world where even the everyday approximate equations of physics, as expressed in classical mechanics, do not apply.
Said another way: If you messed with Newton’s equations or Maxwell’s equations or thermodynamics even a little bit, would the world necessarily function differently in such a way that we could tell that you’d messed with the laws? Would it function so differently as to be unrecognizable?
Or said yet another way: Do our macroscopic experiences entail that the equations of classical mechanics are at least a very good approximation of the underlying physics?
I’d especially appreciate sources / references / links to further reading.
[*Leaving aside the types of modern technology which bring quantum mechanical effects into the everyday observable world.]
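One concrete way to sharpen the question (my own illustration, not something from the thread): consider perturbing the exponent in the inverse-square law by a small $\epsilon$,

```latex
F(r) = -\frac{G M m}{r^{2+\epsilon}}
```

By Bertrand’s theorem, bound orbits close on themselves only for the exact inverse-square (and harmonic-oscillator) potentials, so any $\epsilon \neq 0$ makes planetary orbits precess, which is in principle a macroscopic, everyday-scale observable. Whether a sufficiently tiny $\epsilon$ could still hide below observational precision is presumably one version of the question being asked.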
Check out 80,000 Hours. For finances in particular see their career review for trading in quantitative hedge funds.
Took survey. Didn’t answer all the questions because I suspend judgment on a lot of issues and there was no “I have no idea” option. Some questions did have an “I don’t have a strong opinion” option, but I felt a lot more of them should also have that option.
I’m more interested in epistemic rationality concepts than in practical life advice, although good practical advice is always useful.
I’m an undergrad going for a major in statistics and minors in computer science and philosophy. I also read a lot of philosophy and cognitive science on the side. I don’t have the patience to read through all of the LW sequences. Which LW sequences / articles do you think are important for me to read that I won’t get from school or philosophy reading?
So probability of either Trump or Cruz is 100%?
It’s open source. Right now I only know very basic Python, but I’m taking a CS course this coming semester and I’m going for a minor in CS. How hard do you think it would be to add in other distributions, bounded values, etc.?
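For a sense of what “adding in other distributions, bounded values, etc.” might involve, here is a minimal sketch of sampling-based uncertainty propagation in basic Python. This is my own illustration, not Guesstimate’s actual code; all function names here are made up.

```python
import random

def sample_lognormal(mu, sigma):
    # Log-normal: exp of a normal draw; always positive.
    return random.lognormvariate(mu, sigma)

def sample_bounded_normal(mu, sigma, lo, hi):
    # A bounded value via rejection sampling: redraw until inside [lo, hi].
    while True:
        x = random.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

def propagate(f, samplers, n=10_000):
    # Monte Carlo propagation: draw each input, push the draws through f.
    return [f(*(s() for s in samplers)) for _ in range(n)]

random.seed(0)
results = propagate(
    lambda a, b: a * b,
    [lambda: sample_lognormal(0.0, 0.5),
     lambda: sample_bounded_normal(10.0, 2.0, 5.0, 15.0)],
)
mean = sum(results) / len(results)
```

Each new distribution is just another sampler function, so the core loop never changes; that modularity is why adding distributions to a tool like this is usually more a statistics exercise than a hard programming one.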
Link: Introducing Guesstimate, a Spreadsheet for Things That Aren’t Certain
How useful do you think this actually is?
So it sounds like you’re only disagreeing with the OP in degree. You agree with the OP that a lot of scientists should be learning more about cognitive biases, better statistics, epistemology, etc., just as we are trying to do on LW. You’re just pointing out (I think) that the “informed laymen” of LW should have some humility because (a) in many cases (esp. for top scientists?) the scientists have indeed learned lots of rationality-relevant subject matter, perhaps more than most of us on LW, (b) domain expertise is usually more important than generic rationality, and (c) top scientists are very well educated and very smart.
Is that correct?
In many cases I’d agree it’s pretty crazy, especially if you’re trying to go up against top scientists.
On the other hand, I’ve seen plenty of scientists and philosophers claim that their peers (or they themselves) could benefit from learning more about things like cognitive biases, statistics fallacies, philosophy of science, etc. I’ve even seen experts claim that a lot of their peers make elementary mistakes in these areas. So it’s not that crazy to think that by studying these subjects you can have some advantages over some scientists, at least in some respects.
Of course that doesn’t mean you can be sure that you have the advantage. As I said, probably in most cases domain expertise is more important.
I still haven’t figured out what you have against Bayesian epistemology. It’s not like this is some sort of LW invention—it’s pretty standard in a lot of philosophical and scientific circles, and I’ve seen plenty of philosophers and scientists who call themselves Bayesians.
Solomonoff induction is one of those ideas that keeps circulating here, for reasons that escape me.
My understanding is that Solomonoff induction is usually appealed to as one of the more promising candidates for a formalization of Bayesian epistemology that uses objective and specifically Occamian priors. I haven’t heard Solomonoff promoted as much outside LW, but other similar proposals do get thrown around by a lot of philosophers.
Bayesian methods didn’t save Jaynes from being terminally confused about causality and the Bell inequalities.
Of course Bayesianism isn’t a cure-all by itself, and I don’t think that’s controversial. It’s just that it seems useful in many fundamental issues of epistemology. But in any given domain outside of epistemology (such as causation or quantum mechanics), domain-relevant expertise is almost certainly more important. The question is more whether domain expertise plus Bayesianism is at all helpful, and I’d imagine it depends on the specific field. Certainly for fundamental physics it appears that Bayesianism is often viewed as at least somewhat useful (based on the conference linked by the OP and by a lot of other things I’ve seen quoted from professional physicists).
and the funding
A Kickstarter, perhaps?
Not sure what you mean by this. I actually posted the meetup for the Baltimore area myself.
The Baltimore and Washington DC meetups do show up if I click on “Nearest Meetups”; they just appear in the 5th and 8th spots. That list appears to be sorted first by date and then alphabetically. The San Antonio meetup appears in the #4 slot, and the Durham meetup does not appear at all.
Basically the “nearest” part of nearest meetups seems to be completely broken.
I’m from Baltimore, MD. We have a Baltimore meetup coming up Jan 3 and a Washington DC meetup this Sun Dec 13. So why do the two meetups listed in my “Nearest Meetups” sidebar include only a meetup in San Antonio for Dec 13 and a meetup in Durham NC for Sep 17 2026 (!)?
On the science of how to learn: Make It Stick.
See this article (full article available from sidebar), which argues that although conventional wisdom gives religion the advantage here, the reality may not be so clear-cut.
Link is messed up.