There’s an interesting duality between morality as “the belief that truthseeking is pragmatically important to society” and morality as the result of social truthseeking; the latter is closer to the usual sense of the word, or rather to what the usual sense would ideally be. I’d like to see this explored further, if anyone has a link in mind.
The LessWrong FAQ indicated that there is value in replying to old content, so I’m posting anyway. Context might be in order, so here’s what we are talking about:
I tend to be suspicious of morality as a motivation for rationality
You and I had a similar take on this bit of Yudkowsky’s post. Maybe you would call my stance “truthseeking as the result of morality” instead of your “morality as the result of social truthseeking”.
The problem Yudkowsky is describing sounds like it comes from entangling the “logical” archetype with “morality”: any behavior that differs from the archetype becomes “immoral”, regardless of whether it is actually sound Bayesian reasoning. Personally, I would phrase this as “declaring rationality to be (a) moral value”. That specifically excludes cases where people place intrinsic value on some specific result, and then place instrumental moral value on rationality as a tool for achieving that result. This is, after all, pretty much what effective altruism does.
Hmm, I couldn’t find a link directly on this site. Figured someone else might want it too (although a Google search did kind of solve it instantly).