If it is trivial to do better with a few moments of reflection, then make with the interesting comments. I see your near-universal, non-specific disdainful comments as a significant part of why LW is less pleasant to post to.
Strongly disagree. I would be more enthused about LessWrong if it had more attempts at futurism.
I recommend tabooing the word 'free' in order to think more clearly.
Detecting previously addressed ideas is made much harder by non-obvious terminology.
Increase the delay on your phone's reward loops: activate developer settings, set the color space to black and white, and set animation speeds to 2x or 5x. I tried going back to 2x after months at 5x and it felt palpably neurosis-inducing.
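(For the curious, a minimal sketch of the same tweaks applied over adb rather than by tapping through the menus. The three animation-scale keys are standard Android global settings; whether grayscale is exposed through the color-correction "daltonizer" setting depends on your Android version, so treat that part as an assumption.)

```python
import subprocess

def adb_put(namespace: str, key: str, value: str) -> None:
    """Write one Android setting over adb (phone needs USB debugging enabled)."""
    subprocess.run(
        ["adb", "shell", "settings", "put", namespace, key, value],
        check=True,
    )

# Slow every animation to 5x its normal duration (use "2.0" for 2x).
for key in ("window_animation_scale",
            "transition_animation_scale",
            "animator_duration_scale"):
    adb_put("global", key, "5.0")

# Grayscale via the color-correction ("daltonizer") setting;
# value 0 selects monochromacy on many recent Android versions (assumption).
adb_put("secure", "accessibility_display_daltonizer_enabled", "1")
adb_put("secure", "accessibility_display_daltonizer", "0")
```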
This is my favorite comment in a long while.
Doing yoga improved my rationality skills. If I were rewriting Optimal Exercise I’d add a section titled 'Retraining Your Broken CNS.'
Lossy compression isn’t telos-free, though.
You can play with this right now and simultaneously dissolve some negative judgements. Think about the function of psychics/fortune tellers in poor communities. What do you think is going on there phenomenologically when you turn off your epistemic rigor goggles? Also try it with prayer. What might you conclude about prayer if you were a detached alien? Confession is a pretty interesting one too. What game-theoretic purpose might it be serving in a community of 150 people? I’ve found these types of exercises pretty valuable, especially the less condescending I manage to be.
And suffering-focused EAs do less stuff that tends to lead to the destruction of the world.
In support of this, my system 1 reports that if it sees more intelligent people taking S-risk seriously, it is less likely to nuke the planet if it gets the chance. (I’m not sure I endorse nuking the planet; I’m just reporting an emotional reaction.)
X-risk is still plausibly worse in that we need to survive to reach as much of the universe as possible and eliminate suffering in other places.
Edit: Brian talks about this here: https://foundational-research.org/risks-of-astronomical-future-suffering/#Spread_of_wild_animals-2
Related: perverse ontological lock-in. Building things on top of ontological categories tends to cement those categories, since we think we need them to keep getting value from what we built. But if the folk ontology doesn’t carve reality at the joints, there will be friction in all the stories/predictions/expectations assembled from those ontological pieces, along with an unwillingness to drop the folk ontology, on the belief that you will lose all the value of the things you’ve built on top of it. One way to read the punctuated-equilibrium model of psychological development is as periodic rebasing operations.
Agree about the creation:critique ratio. Generativity/creativity training is the rationalist community’s current bottleneck IMO.
Meta: if something has tons of evidence behind it and you can’t bring yourself to try it for a month, ask yourself, TDT-wise, what your life looks like with and without the skill of ‘try seemingly good ideas for a month.’
Babbler reality has a strong pull because it doles out tasty treats.
It does have access to your nervous system, since your nervous system can be rewired by backdriving inputs from your perceptions.
Olivia Fox Cabane’s books are where I’d start. Then Kegan’s Immunity to Change.
Non-DSM: Opening the Heart of Compassion. People with psychotherapy chops explain the Buddhist model of pathology in an entertaining way.
I think values are confusing because they aren’t a natural kind. The first decomposition that made sense to me was two axes, stated/revealed and local/global (see the sketch after this comment):
Stated local values are optimized for positional goods; stated global values are optimized for alliance building; revealed local values are optimized for basic needs/risk avoidance; revealed global values barely exist, and when they do they are semi-random, based on mimesis and other weak signals (humans are not automatically strategic, etc.).
Trying to build a coherent picture out of the various outputs of four semi-independent processes doesn’t quite work. Even stating it this way reifies values too much. I think there are just local pattern recognizers/optimizers doing different things, which we have globally labeled ‘values’ because of their overlapping connotations in affordance space, and because switching between levels of abstraction is highly useful for calling people out in sophisticated, hard-to-counter ways in monkey politics.
It is also useful to think of local/global as Dyson’s birds and frogs, or as surveying vs. navigation.
I’m unfamiliar with existing attempts at value decomposition; pointers to papers etc. would be appreciated.
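(Purely as a reading aid, a minimal sketch of that 2x2 as a lookup table. The quadrant labels and the "optimized for" annotations come straight from the comment above; the structure itself is hypothetical.)

```python
from itertools import product

# Hypothesized 2x2 decomposition of "values" from the comment above:
# axis 1 = stated vs. revealed, axis 2 = local vs. global.
OPTIMIZED_FOR = {
    ("stated", "local"): "positional goods",
    ("stated", "global"): "alliance building",
    ("revealed", "local"): "basic needs / risk avoidance",
    ("revealed", "global"): "barely exist; semi-random (mimesis, other weak signals)",
}

for expression, scope in product(("stated", "revealed"), ("local", "global")):
    print(f"{expression:>8}/{scope:<6} values -> {OPTIMIZED_FOR[(expression, scope)]}")
```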
On predictions: humans treating themselves and others as agents seems to lead to a lot of problems. We could also deconstruct poor predictions by which subsystem’s limits they run into: availability, working memory, failure to propagate uncertainty, inconsistent time preferences... Can we just invert the bullet points from Superforecasting here?
This is excellent! Can this reasoning be improved by attempting to map the overlaps between x-risks more explicitly? The closest thing I can think of is some of Turchin’s work.