I don’t think any bumper sticker successfully encapsulates my terminal values. I’m highly sympathetic to ethical pluralism and particularism. I value fairness and happiness (politically I’m a cosmopolitan Rawlsian liberal), with additional values of freedom and honesty which under certain conditions can trump fairness and happiness. I also value the existence of what I would recognize as humanity, and limiting the possibility of the destruction of humanity can sometimes trump all of the above. My values are weighted toward myself, my family, and my friends. It’s possible that all of these things could be reduced to more fundamental values; I’m not sure. There are cases where I have no good procedure for evaluating which outcome is more desirable.
My terminal values are to minimize suffering and to maximize individual freedom and the ability to create, explore, and grow in wisdom by learning about the universe.
It is worth noting, if you think these are rationally justifiable somehow, that maximizing two different values is going to leave you with an incomplete preference ordering in some circumstances. Some options will minimize suffering but fail to maximize freedom, and vice versa.
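The incompleteness point can be made concrete with a small sketch (the options and scores here are purely illustrative, not anything from the discussion): if each option is scored separately on each value, then under Pareto dominance some pairs of options are simply incomparable, so "maximize both" returns no verdict.

```python
# Illustrative sketch: each option gets a score per value; higher is better.
# Neither of two options may dominate the other, so a two-value maximizer
# has no answer for that pair.

def dominates(x, y):
    """True if option x is at least as good as y on every value and
    strictly better on at least one (Pareto dominance)."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# Hypothetical (suffering_averted, freedom) scores
option_a = (9, 2)   # averts much suffering, preserves little freedom
option_b = (3, 8)   # preserves much freedom, averts less suffering

incomparable = not dominates(option_a, option_b) and not dominates(option_b, option_a)
```

With these scores neither option dominates, so `incomparable` is true; only a further assumption (such as a weighting) can break the tie.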
If anyone has different terminal values, I’d like to hear more about that.
If you were looking for people here with different values, see above (though I don’t know how much we differ). But note that the people here are going to have heavy overlap on values for semi-obvious reasons. But there are people out there who assign intrinsic moral relevance to national borders, race, religion, sexual purity, tradition etc. Do you still deny that?
It seems to me that the terminal values you list are really just means to an end, and that the end in question is similar to my own—i.e. some combination of minimizing harm and maximizing freedom (to put it in terms which are a bit of an oversimplification).
For example: I also favor ethical pluralism (I’m not sure what “particularism” is), for the reasons that it leads to a more vibrant and creative society, whilst the opposite (which I guess would be suppressing or discouraging any but some “dominant” culture) leads to completely unnecessary suffering.
You are right that maximizing two values is not necessarily solvable. The apparent duality of the goal as stated has more to do with the shortcomings of natural language than it does with the goals being contradictory. If you could assign numbers to “suffering” (S) and “individual freedom” (F), I would think that the goal would be to maximize aS + bF for some values of a and b which have yet to be worked out.
[Addendum: this function may be oversimplifying things as well; there may be one or more nonlinear functions applied to S and/or F before they are added. What I said below about the possible values of a and b applies also to these functions. A better statement of the overall function would probably be f_a(S) + f_b(F), where f_a() and f_b() are both—I would think—positively-sloped for all input values.]
[Edit: ACK! Got confused here; the function for S would be negative, i.e. we want less suffering.]
[Another edit in case anyone is still reading this comment for the first time: I don’t necessarily count “death” as non-suffering; I suppose this means “suffering” isn’t quite the right word, but I don’t have another one handy]
The exact values of a and b may vary from person to person—perhaps they even are the primary attributes which account for one’s political predispositions—but I would like to see an argument that there is some other desirable end goal for society, some other term which belongs in this equation.
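A minimal sketch of the objective discussed above, incorporating the sign correction (suffering enters negatively, since we want less of it). The weights and the increasing transforms are illustrative placeholders for the values said to be “yet to be worked out”:

```python
import math

# Hedged sketch of the proposed social objective.  The weights a, b and the
# transforms f_a, f_b are illustrative stand-ins, not worked-out values.
a, b = 1.0, 1.0

def f_a(s):
    # increasing transform of suffering; enters the total with a minus sign
    return math.log1p(s)

def f_b(f):
    # increasing transform of individual freedom
    return math.log1p(f)

def social_value(suffering, freedom):
    return -a * f_a(suffering) + b * f_b(freedom)

# A single scalar objective ranks options that separate maximization of the
# two values would leave incomparable:
better = max([(9.0, 2.0), (3.0, 8.0)], key=lambda sf: social_value(*sf))
```

The point of the scalarization is precisely that once a, b, f_a, and f_b are fixed, every pair of options becomes comparable; all of the disagreement is pushed into choosing them.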
...there are people out there who assign intrinsic moral relevance to national borders, race, religion, sexual purity, tradition etc. Do you still deny that?
I do not deny this, but I also do not believe they are being rational in those assignments. Why should the “morality” of a particular act matter in the slightest if it has been shown to be completely harmless?
For example: I also favor ethical pluralism (I’m not sure what “particularism” is), for the reasons that it leads to a more vibrant and creative society, whilst the opposite (which I guess would be suppressing or discouraging any but some “dominant” culture) leads to completely unnecessary suffering.
This is my fault. I don’t mean multiculturalism or political pluralism. I really do mean pluralism about terminal values. By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent. Note that I’m not actually a particularist, since I did give you moral principles. I would say that I am a value pluralist.
It seems to me that the terminal values you list are really just means to an end, and that the end in question is similar to my own—i.e. some combination of minimizing harm and maximizing freedom (to put it in terms which are a bit of an oversimplification).
But I’m explicitly denying this. For example, I am a cosmopolitan. In your discussion with Matt you’ve said that for now you care about helping poor Americans, not the rest of the world. But this is totally antithetical to my terminal values. I would vastly prefer to spend political and economic capital to get rid of agricultural subsidies in the developed world, liberalize as many immigration and trade laws as I can, and test strategies for economic development. Whether or not the American working class has cheap health care really is quite insignificant to me by comparison.
Now, when I say I have a terminal value of fairness I really do mean it. I mean I would sacrifice utility or increase overall suffering in some circumstances in order to make the world more fair. I would do the same to make the world more free and the same to make the world more honest in some situations. I would do things that furthered the happiness of my friends and family but increased your suffering (nothing personal). I don’t know what gives you reason to deny any of this.
I do not deny this, but I also do not believe they are being rational in those assignments. Why should the “morality” of a particular act matter in the slightest if it has been shown to be completely harmless?
Now you’re just begging the question. My whole point this entire time is that there is no reason for morality to always be about harm. Indeed, there is no reason for morality to ever be about harm except that we make it so. I frankly don’t even understand the application of the word “rationality” as we use it here to values. Unless you have a third meaning for the word your usage here is just a category error!
By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent.
So how do you rationally decide if an action is right or wrong? -- or are you saying you can’t do this?
Also, just to be clear: you are saying that you do not believe rightness or wrongness of an action ultimately derives from whether or not it does harm? (“Harm” being the more common term; I tried to refine it a bit as “personally-defined suffering”, but I think you’re disagreeing with the larger idea—not my refinement of it.)
In your discussion with Matt you’ve said that for now you care about helping poor Americans, not the rest of the world.
Matt (I believe) misinterpreted me that way too. No, that is not what I said.
What I was trying to convey was that I thought I had a workable and practical principle by which poor Americans could be helped (redistribution of American wealth via mechanisms and rules yet to be worked out), while I don’t have such a solution for the rest of the world [yet].
I tried to make it quite clear that I do care about the rest of the world; the fact that I don’t yet have a solution for them (and am therefore not offering one) does not negate this.
I also tried to make it quite clear that my solution for Americans must not come at the price of harming others in the world, and that (further) I believe that as long as it avoids this, it may be of some benefit to the rest of the world as we will not be allowing unused resources to languish in the hands of the very richest people (who really don’t need them) -- leaving the philanthropists among us free to focus on poverty worldwide rather than domestically.
(At a glance, I agree with your global policy position. I don’t think it contradicts my own. I’m not talking about reallocation of existing expenditures—foreign aid, tax revenues, etc. -- I’m talking about reallocating unused—one might even use the word “hoarded”—resources, via means socialistic, capitalistic, or whatever means seems best*.)
(*the definition of this slippery term comes back ultimately to what we’re discussing here: “what is good?”)
Now you’re just begging the question. My whole point this entire time is that there is no reason for morality to always be about harm. Indeed, there is no reason for morality to ever be about harm except that we make it so. I frankly don’t even understand the application of the word “rationality” as we use it here to values. Unless you have a third meaning for the word your usage here is just a category error!
First of all, when I say “harm” or “suffering”, I’m not talking about something like “punishing someone for bad behavior”; the idea behind doing that (whether correct or not) is that this ultimately benefits them somehow, and any argument over such punishment will be based on whether harm or good is being done overall. “Hitting a masochist” would not necessarily qualify as harm, especially if you will stop when the masochist asks you to.
Second… when we look at harm or benefit, we have to look at the system of people affected. This isn’t to say that if {one person in the system benefits more than another is harmed} then it’s ok, because then we get into the complexity of what I’ll call the “benefit aggregation function”—which involves values that probably are individual.
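To make the “benefit aggregation function” point concrete, here is a hypothetical sketch (the policies and numbers are invented for illustration): two common aggregators can reverse the verdict on the same distribution of harms and benefits, which is why the choice of aggregator itself smuggles in values.

```python
# Two hypothetical policies, each listing the net benefit (+) or harm (-)
# experienced by three affected people.  Numbers are purely illustrative.
policy_x = [10, 1, -4]   # big win for one person, one person harmed
policy_y = [2, 2, 2]     # modest benefit spread evenly

def total(outcomes):
    # utilitarian-style aggregation: sum of benefits and harms
    return sum(outcomes)

def worst_off(outcomes):
    # Rawlsian-style aggregation (maximin): judge by the worst-off person
    return min(outcomes)

# The two aggregators disagree about which policy is better:
prefers_x_by_total = total(policy_x) > total(policy_y)            # 7 > 6
prefers_y_by_maximin = worst_off(policy_y) > worst_off(policy_x)  # 2 > -4
```

Neither aggregator is forced on us by the bare notion of “harm”; picking one over the other is exactly the kind of individual-values question the paragraph above gestures at.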
It’s also reasonable (and often necessary) to look at a decision’s effects on society (if you let one starving person get away with stealing a cookie under a particular circumstance, then other hungry people may think it’s always okay to steal cookies) in the present and in the long term. This is the basis of many arguments against gay marriage, for example—the idea that society will somehow be harmed—and hence individuals will be harmed as society crumbles around them—by “changing the definition of marriage”. (The evidence is firmly against those arguments, but that’s not the point.)
Third: I’m arguing that “[avoiding] harm” is the ultimate basis for all empathetic-human arguments about morality (by “empathetic humans” I mean humans with empathy—specifically excluding psychopaths and other people whose primary motive is self-gratification), and I suggest that this would be true for any successful social species, not just humans.
I suggest that if you can’t argue that an action causes harm of some kind, you have absolutely no basis for claiming the action is wrong (within the context of discussions with other humans or social sophonts).
You seem to be arguing, however, that actions can be wrong without causing any demonstrable harm. Can you give an example?
So how do you rationally decide if an action is right or wrong? -- or are you saying you can’t do this?
There is no such thing as “rationally deciding if an action is right or wrong”. This has nothing to do with particularism. It’s just a metaethical position. I don’t know what can be rational or irrational about morality.
Again, though, I’m not a particularist; I do have principles I can apply if I don’t have strong intuitions. A particularist only has her intuitions.
Also, just to be clear: you are saying that you do not believe rightness or wrongness of an action ultimately derives from whether or not it does harm? (“Harm” being the more common term; I tried to refine it a bit as “personally-defined suffering”, but I think you’re disagreeing with the larger idea—not my refinement of it.)
I don’t believe my own morality can be reduced to language about harm. I’m not sure what “ultimately derives” means, but I suspect my answer is no. My morality happens to have a lot to do with harm (again, I’m a Haidtian liberal). But I don’t think that makes my morality more rational than a morality that is less about harm. There is no such thing as a “rational” or “irrational” morality, only moralities I find silly or abhorrent.
I tried to make it quite clear that I do care about the rest of the world; the fact that I don’t yet have a solution for them (and am therefore not offering one) does not negate this.
If it’s the case that you care about the rest of the world then I don’t think you realize how non-ideal your prescriptions are. You’re basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice.
I also tried to make it quite clear that my solution for Americans must not come at the price of harming others in the world, and that (further) I believe that as long as it avoids this, it may be of some benefit to the rest of the world as we will not be allowing unused resources to languish in the hands of the very richest people (who really don’t need them) -- leaving the philanthropists among us free to focus on poverty worldwide rather than domestically.
But of course it comes at the price of harming the rest of the world. You’re advocating sacrificing political resources to pass legislation. Those resources are to some extent limited, which means you’re decreasing the chances of, or at least delaying, changes in policy which would actually benefit the poorest. Moreover, social entitlements are notoriously impossible to overturn, which means you’re putting all this capital in a place we can’t take it from to give to the people who really need it. Shoot, at least the mega-rich are sometimes using their money to invest in developing countries.
This doesn’t even get us into preventing existential risk. Whenever you have a utility-like morality, using resources inefficiently is about as bad as actively doing harm.
You seem to be arguing, however, that actions can be wrong without causing any demonstrable harm. Can you give an example?
None you’ll agree with! You’ve already said your morality is about preventing harm! But like it or not there are people who really don’t care about suffering outside their own country. There are people who think gay marriage is wrong no matter what effects it has on society (just as there are those, like me, who think it should be legal even if it damages society). There are those who do not believe we should criticize our leader under certain circumstances. There are those who believe our elders deserve respect above and beyond what they deserve as humans. There are those who believe sex outside of marriage is wrong. There are those who believe eating cow is immoral; there are others who believe eating cow is delicious. None of these people are necessarily rational or irrational.
I’ll reiterate one question: What do you mean by rational in “rational morality”?
You’re basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice.
I’ve explained repeatedly—perhaps not in this subthread, so I’ll reiterate—that I’m only proposing reallocating domestic resources within the US, not resources which would otherwise be spent on foreign aid of any kind. I don’t see how that can be harmful to anyone except (possibly) the extremely rich people from whom the resources are being reallocated.
(Will respond to your other points in separate comments, to maximize topic-focus of any subsequent discussion.)
So how do you rationally decide if an action is right or wrong? -- or are you saying you can’t do this?
I don’t know what can be rational or irrational about morality.
This is taken out of context, but I must take issue with it. If you can decide whether an action is right or wrong, then that decision can be made rationally, for any decent definition of ‘rationality’ that is about decisions.
So if you want to claim, “One cannot rationally decide whether an action is right or wrong”, that reduces to “One cannot decide whether an action is right or wrong”. In that case, would it be because your decisions can’t affect your beliefs, or because there is objective morality, or some other reason?
I’m not sure I understand your issue. If this response doesn’t work, you may have to re-explain.
If you have some values—say happiness—then there can be irrational ways of evaluating actions in terms of those values. So if I’m concerned with happiness but only look at the effects of the action on my sneakers, and not on the emotions of people, that seems irrational if happiness is really what I care about. Certainly there are actions which can be either consistent or inconsistent with some set of values, and taking actions that are inconsistent with your values is irrational. What I don’t see is what it could mean for those values to be rational or irrational in the first place. I don’t think people “decide” on terminal values in the way they decide on breakfast, or to give to one charity over another.
Does that address your concern?
See my comment about “internal” and “external” terminal values—I think possibly that’s where we’re failing to communicate.
Internal terminal values don’t have to be rational—but external ones (goals for society) do, and need to take individual ones into account. Violating an individual internal TV causes suffering, which violates my proposed universal external TV.
For instance… if I’m a heterosexual male, then one of my terminal values might be to form a pair-bond with a female of my species. That’s an internal terminal value. This doesn’t mean that I think everyone should do this; I can still support gay rights. “Supporting gay rights” is an external value, but not a terminal one for me. For a gay person, it probably would be a terminal value—so prohibiting gays from marrying would be violating their internal terminal values, which causes suffering, which violates my proposed universal external terminal value of “minimizing suffering / maximizing happiness”—and THAT is why it is wrong to prohibit gays from marrying, not because I personally happen to think it is wrong (i.e. not because of my external intermediate value of supporting gay rights).
I’m fine with that distinction but it doesn’t change my point. Why do external terminal values have to be rational? What does it mean for a value to be rational?
Can you just answer those two questions?
Here’s my answer, finally… or a more complete answer, anyway.
It’s not visible, I think you have to publish it.
I finally figured out what was going on, and fixed it. For some reason it got posted in “drafts” instead of on the site, and looking at the post while logged in gave no clue that this was the case.
Sorry about that!