I disagree with your treatment of statement #5. It’s hard to explain directly why, so let me analogize:
Humans aren’t perfectly rational. However, we can strive to be more rational, even though we as humans can never reach perfect rationality.
Now, replace “rational” and “rationality” in the above with “moral” and “morality”, and therein lies the reasoning. To say that you could be doing more for charity, or that you could be nicer to your fellow man, &c., is exactly to say that you could be more moral. But this “could” is in a purely abstract sense; humans are exactly as moral (and as rational) as they can be, given their own brains.
Thus it is written:

If you feel that giving 5% of your income to charity isn’t enough, and that the moral ideal is 10%, try giving 6%. Make the best choices you can make with the willpower you have. The choice isn’t between giving 5% and 10%; you don’t have that much willpower in the bank. The choice is between giving 5% and 6%. The better choice is 6%. Now you’ve made a better choice; feel happy. Feeling guilty about not having willpower doesn’t contribute to the development of willpower. Rather, try for the proper exercise of available willpower, and the slow reshaping of the self that results.
humans are exactly as moral (and as rational) as they can be, given their own brains
I’m not sure of that, unless you use a very restrictive definition of “can” which in a deterministic universe would make it synonymous with “are”; but down that path lies Fatalistic Decision Theory (“choice is futile”).
Nope, I’m talking about the subjective “nows” of the humans in question, not their futures. Although a person who isn’t particularly rational, has never heard of rationality, and wouldn’t feel particularly motivated to become more rational if you mentioned it to him, does have a pretty irrational-looking future; in that case there’s no choice to make, no will, only a default path.