Rationality has plenty to say about whether abortion is morally permissible.
Are fetuses sentient, for example? Do they feel pain?
What would happen socially, economically, if we outlawed abortion? Who would benefit? Who would be harmed? How much?
If you’re a strict utilitarian, moral problems reduce to factual problems. But even if you’re not, facts often have a great deal to say about morality. This is especially true in issues like economics and foreign policy, where the goals are largely undisputed and it’s the facts and methods that are in question. I challenge you to find an American politician who says he wants to increase poverty or undermine American national security. “We need 10% of Americans to starve! And by the way, I hope China invades!” (I guess I should hedge my bets and say that such bizarre people may exist—after all, Creationists do—but they aren’t likely to get a lot of votes from any party.)
Also, rationality can assess the arguments used for and against political positions. If one side is using a lot of hard data and the other one is making a lot of logical fallacies… that should give you a pretty good idea of which side to be on. (It’s no guarantee, but what is?)
First you need to decide what gives utility points to you, which is a moral problem.
I consider most computer programs to be sentient, with their working memory constituting that sentience. I also see pain as just a bit of programming that makes creatures avoid whatever causes it, no different from some regulators I have programmed. Therefore I don’t care whether fetuses are sentient or feel pain, so for me that does not affect the utility calculation. But most people do not agree.
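To make the analogy concrete (this is only an illustrative sketch, not anything the commenter actually described): below is a minimal negative-feedback "regulator" whose whole job is to back away from a damaging stimulus, the kind of mechanism the comment equates with pain. The threshold and gain values are invented for the example.

```python
# Minimal sketch of an avoidance regulator, assuming a scalar "damage signal"
# (e.g., temperature above a safe threshold). The point of the analogy: the
# controller "avoids what hurts it" with a few lines of feedback logic, and
# no inner experience is involved.

SAFE_THRESHOLD = 40.0   # hypothetical damage threshold
GAIN = 0.5              # how strongly the regulator backs away

def regulate(position: float, damage_signal: float) -> float:
    """Move away from the stimulus in proportion to how far it exceeds the threshold."""
    excess = damage_signal - SAFE_THRESHOLD
    if excess > 0:
        # "Pain response": retreat proportionally to the excess signal.
        return position - GAIN * excess
    return position  # no damage signal, no avoidance behaviour

if __name__ == "__main__":
    pos = 10.0
    for reading in (35.0, 42.0, 50.0, 38.0):
        pos = regulate(pos, reading)
        print(f"damage={reading:>5}  position={pos:.1f}")
```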