I suppose the first step would be to become more instrumentally rational about what I should be instrumentally rational about. What are the goals most worth achieving, or, put another way, what are my values?
Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling has not been especially helpful. Thanks in advance for your assistance.
Try some exposure therapy to whatever it is you’re often afraid of. Can’t think of what you’re often afraid of? I’d be surprised if you’re completely immune to every common phobia.
Advice from the Less Wrong archives.
Very interested.
Also, here’s a bit of old discussion on the topic I found interesting enough to save.
If you can’t appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.
I don’t think there are any such community pressures, as long as a summary accompanies the link.
Thanks!
I recently noticed “The Fable of the Dragon-Tyrant” under the front page’s Featured Articles section, which caused me to realize that there’s more to Featured Articles than the Sequences alone. This particular article (an excellent one, by the way) is also not from Less Wrong itself, yet is obviously relevant to it; it’s hosted on Nick Bostrom’s personal site.
I’m interested in reading high-quality non-Sequences articles (I’m making my way through the Sequences separately using the [SEQ RERUN] feature) relevant to Less Wrong that I might have missed, so is there an archive of Featured Articles? I looked, but was unable to find one.
Michaelcurzi’s How to avoid dying in a car crash is relevant. Bentarm’s comment on that thread makes an excellent point regarding coronary heart disease.
There are also Eliezer Yudkowsky’s You Only Live Twice and Robin Hanson’s We Agree: Get Froze on cryonics.
I have a few questions, and I apologize if these are too basic:
1) How concerned is SI with existential risks vs. catastrophic risks?
2) If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks (certain cat. risks might raise or lower the likelihood of other cat. risks, certain cat. risks might raise or lower the likelihood of certain x-risks, etc.)? It must be hard avoiding the conjunction fallacy! Or is this sort of thing more what the FHI does?
3) Is there much tension in SI thinking between achieving FAI as quickly as possible (to head off other x-risks and cat. risks) vs. achieving FAI as safely as possible (to head off UFAI), or does one of these goals occupy significantly more of your attention and activities?
Edited to add: thanks for responding!
One possible alternative would be choosing to appear in the Americas.
To add to Principle #5, in a conversational style: “if something exists, that something can be quantified. Beauty, love, and joy are concrete and measurable; you just fail at measuring them. To be fair, you lack the scientific and technological means of doing so, but failure is still failure. Your failing at quantification does not devalue something of value.”