The subjective part probably could have been shortened, but I thought it was at least partly necessary in order to give proper context, as in “why are you trying to define rationality when this whole web site is supposed to be about that?” or similar.
The question is, was it informative? If not, then how did it fail in that goal?
Maybe I should have started with the conclusions and then explained how I got there.
I felt like I didn’t get the informativeness I bargained for, somehow. Your list of requirements for a rational conversation and your definition of a moral rational decision seem reasonable, but straightforward; even after reading your long exposition, I didn’t really find out why these are interesting definitions to arrive at.
EDIT: One caveat is that it’s not totally clear to me where the line between “ethical” goals and other goals lies, if there is such a line. Consequently, I don’t know how to distinguish between a moral rational decision and just a plain old rational decision. Are ethical goals ones that have a larger influence on other people?
(In particular, I didn’t understand the point of contention in the comment thread you linked to, the one that prompted this post. It seems pretty obvious to me that rationality in a moral context is the same as rationality in any other context: making decisions that are best suited to fulfilling your goals. You never really did address his final question of “how can a terminal value be rational?” (my answer would be that it’s nonsense to call a value rational or irrational).)
I’m not sure it’s important that my conclusions be “interesting”. The point was that we needed a guideline (or set thereof), and as far as I know this need has not been previously met.
Once we agree on a set of guidelines, then I can go on to show examples of rational moral decisions—or possibly not, in which case I update my understanding of reality.
Re ethical vs. other kinds: I’m inclined to agree. I was answering an argument that there is no such thing as a rational moral decision. Jack drew this distinction, not me. Yes, I took way too long coming around to the conclusion that there is no distinction, and I left too much of the detritus of my thinking process lying around in the final essay...
...but on the other hand, it seemed perhaps a little necessary to show a bit of my work, since I was basically coming around to saying “no, you’re wrong”.
If what you’re saying is that there should have been no point of contention, then I agree with that too.
“How can a terminal value be rational?”: As far as this argument goes, I assert no such thing. I’m not clear on how that question is important for supporting the point I was trying to make in that argument, much less this one.
I have another argument for the idea that it’s not rational to argue on the basis of a terminal value which is not at least partly shared by your audience—and that if your audience is potentially “all humanity”, then your terminal value should probably be something approaching “the common good of all humanity”. But that’s not a part of this argument.
I could write a post on that too, but I think I need to establish the validity of this point (i.e. how to spot the loonie) first, because that point (rationality of terminal values) builds on this one.