If you have an argument why Pancritical rationalism applies better in preferences and behaviors than beliefs, I’m all ears.
We start with a preference or a belief or a behavior (or something else), so we never have a choice between doing pancritical rationalism with a preference or doing pancritical rationalism with a belief. Comparing the two is therefore not relevant. What is relevant is whether pancritical rationalism with preferences is worthwhile.
Pancritical rationalism is nontrivial for preferences because we presently have multiple possible criticisms, and none of them conclusively prove that something is wrong with the preference. So the choices I can see are:
We could choose not to talk about preferences at all. Preferences are important, so that’s not good.
We could talk about preferences without understanding the nature of the conversation. The objective morality bullshit that has been argued a few times seems to be a special case of this. I wouldn’t want to participate in that again.
We can do pancritical rationalism with preferences.
I would really like a better alternative, but I do not see one.
For beliefs and behaviors, I agree at this point that PCR doesn’t give much leverage. We can trivialize PCR for beliefs down to Bayes’ rule and choosing a prior. We can trivialize PCR for behaviors down to the rule of choosing the behavior that you believe will best satisfy your preferences. If you don’t want to assume rationality and unbounded computational resources, there might be more criticisms of belief and behavior that are worthwhile, but it’s a small win at best and probably not worth talking about given that people don’t seem to be getting the main point.
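To make the trivialized forms concrete, here is a toy sketch (all hypotheses, probabilities, and utilities are made up for illustration): beliefs reduce to a Bayesian update from a chosen prior, and behaviors reduce to picking the action with the highest expected preference-satisfaction under the resulting posterior.

```python
# Toy sketch of the two trivialized forms of PCR described above.
# All numbers are hypothetical, chosen only for illustration.

def bayes_update(prior, likelihood):
    """Return the posterior over hypotheses given per-hypothesis likelihoods."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Beliefs: choose a prior over two hypotheses, then apply Bayes' rule
# to some evidence (here, observing clouds).
prior = {"rain": 0.3, "no_rain": 0.7}
likelihood = {"rain": 0.9, "no_rain": 0.2}   # P(clouds | hypothesis)
posterior = bayes_update(prior, likelihood)

# Behaviors: choose the action whose expected utility under the posterior
# best satisfies your preferences.
utility = {
    "take_umbrella": {"rain": 1.0, "no_rain": 0.6},
    "leave_umbrella": {"rain": 0.0, "no_rain": 1.0},
}

def expected_utility(action):
    return sum(posterior[h] * utility[action][h] for h in posterior)

best = max(utility, key=expected_utility)
print(posterior, best)
```

In this made-up setup the evidence shifts the posterior toward rain, so the expected-utility rule picks taking the umbrella; the point is only that both "trivialized" steps are mechanical once the prior and preferences are fixed.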