> 3. Those who are more able to comprehend and use these models are therefore of a higher agency/utility and higher moral priority than those who cannot. [emphasis mine]
This (along with the claim in the Death with Dignity post that “dignity” implies “moral worth”) is confusing to me. Could you give a specific example of how you’d treat someone differently based on their having more or less moral worth (e.g. give them more money, attention, life-saving help, etc.)?
One thing I could take from your Death with Dignity excerpt is that he’s definitely implying a metric that scores everyone, and some people will score higher on it than others. It’s also common to want to score high on such metrics, or to feel emotionally bad if you don’t (see my post for more). Scoring high could even have utility, like having more “dignity” getting you a thumbs up from Yudkowsky, or having your words listened to more in this community. Is this close to what you mean at all?
Rationalism is path-dependent
I was a little confused by this section. Is it saying that humans’ goals and options (including which options come to mind) change depending on the environment, so rational choice theory doesn’t apply?
Games and Game Theory
I believe the thesis here is that game theory doesn’t really apply in real life: real situations usually have extra constraints or freedoms that change the payoffs.
I do think this criticism is already handled by “actually winning” and “trying to try”; though I’ve personally benefitted specifically from trying to try and from David Chapman’s meta-rationality post.
Probability and His Problems
The idea of deference (and when to defer) isn’t novel (which is fine! Novelty is another metric I’m bringing up, but not everything one writes needs it). It’s still useful to apply Bayes’ theorem to deference: specifically, if there is evidence that would convince you to trust someone, there must also be possible evidence that would convince you not to trust them.
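The Bayesian point here is conservation of expected evidence: if one observation raises your trust in someone, the opposite observation must lower it, and the probability-weighted average of the two posteriors equals your prior. A minimal numerical sketch (the probabilities are illustrative, not from the discussion):

```python
# Conservation of expected evidence, applied to deference.
# H = "this person is trustworthy"; E = some observation, e.g.
# "their past predictions came true". All numbers are made up.

def posterior(prior, p_e_given_h, p_e_given_not_h, observed_e=True):
    """Bayes' theorem for a binary hypothesis H, given E or not-E."""
    if observed_e:
        num = p_e_given_h * prior
        denom = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    else:
        num = (1 - p_e_given_h) * prior
        denom = (1 - p_e_given_h) * prior + (1 - p_e_given_not_h) * (1 - prior)
    return num / denom

prior = 0.5                # initial trust in the person
p_e_h, p_e_nh = 0.8, 0.3   # E is likelier if they are trustworthy

up = posterior(prior, p_e_h, p_e_nh, observed_e=True)     # trust rises
down = posterior(prior, p_e_h, p_e_nh, observed_e=False)  # trust falls

p_e = p_e_h * prior + p_e_nh * (1 - prior)   # marginal P(E)
expected = up * p_e + down * (1 - p_e)       # equals the prior exactly
```

So you cannot coherently expect every possible observation to increase your trust: the updates in each direction must balance out to the prior in expectation.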
This is all I have time for at the moment; however, my current understanding is that there is a common interpretation of Yudkowsky’s writings/the Sequences/LW/etc. that leads to an over-reliance on formal systems that will inevitably fail people. I think you had this interpretation (do correct me if I’m wrong!), and this is your “attempt to renegotiate rationalism”.
There is the common response of “if you re-read the Sequences, you’ll see how they actually handle all the flaws you mentioned”; however, if many people consistently misinterpret them, that is at the very least a failure of communication.
Glad to hear you’re synthesizing and doing pretty well now :)
Thank you for the feedback! I am of course happy for people to copy over the essay.
> Is this saying that human’s goals and options (including options that come to mind) change depending on the environment, so rational choice theory doesn’t apply?
More or less, yes; or at least that it becomes very hard to apply in a way that isn’t either highly subjective or essentially post-hoc arguing about what you ought to have done (hidden information/hindsight being 20/20).
> This is all I have time for at the moment; however, my current understanding is that there is a common interpretation of Yudkowsky’s writings/the Sequences/LW/etc. that leads to an over-reliance on formal systems that will inevitably fail people. I think you had this interpretation (do correct me if I’m wrong!), and this is your “attempt to renegotiate rationalism”.
I’ve definitely met people who take the more humble, heuristics-driven approach which I outline in the essay and still call themselves rationalists. On the other hand, I have also seen a whole lot of people take rationalism as some kind of mystic formula to organise their lives around. My general argument is that rationalism should not be constructed on top of such a formal basis (cf. the section on heuristics, not theories, in the essay) and then “watered down” to reintroduce ideas of humility, nuance, or path-dependence. And in part 2 I argue that the core principles of rationalism as I see them (without the “watering down” of time and life experience) make it easy to fall down certain dangerous pathways.
And as for the specific implications of “moral worth”, here are a few:
- You take someone’s opinions more seriously.
- You treat them with more respect.
- When you disagree, you take time to outline why, and to pre-emptively “check yourself”.
- When someone with higher moral worth is at risk, you consider it a bigger problem than a random person on earth being at risk.