Here are some (mostly critical) notes I made reading the post. Hope they help you figure things out.
> * If you can’t say something nice, don’t say anything at all.
This is such bad advice (I realise eapache is giving it as an example of common moral sayings, though I note it’s neither an aphorism nor a phrase). Like maybe it applies when talking to a widow at her husband’s funeral?
“You’re going too fast”, “you’re hurting me”, “your habit of overreaching hurts your ability to learn”, etc. These are good things to say in the right context, and not saying them allows bad things to keep happening.
> The manosphere would happily write me off as a “beta male”, and I’m sure Jordan Peterson would have something weird to say about lobsters and serotonin.
I don’t know why this is in here, particularly the second clause—I’m not sure it helps with anything. It’s also mean.
> This combination of personality traits makes …
The last thing you talk about is what Peterson might say, not your own personality. Sounds like you’re talking about the personality trait(s) of “[having] something weird to say about lobsters and serotonin”.
> This combination of personality traits makes Postel’s Principle a natural fit for defining my own behaviour.
I presume you mean “guiding” more than “defining”. It could define standards you hold for your own behaviour.
> *[People who know me IRL will point out that in fact I am pretty judgemental a lot of the time. But I try and restrict my judginess … to matters of objective efficiency, where empirical reality will back me up, and avoid any kind of value-based judgement. E.g. I will judge you for being an ineffective, inconsistent feminist, but never for holding or not holding feminist values.]*
This is problematic, e.g. ‘I will judge you for being an ineffective, inconsistent nazi, but never for holding or not holding nazi values’. Making moral judgements is important. That said, judging things poorly is (possibly very) harmful. (Examples: treating all moral inconsistencies as equally bad, or treating some racism as acceptable b/c of the target race)
> annoyingly large number of people
I think it’s annoyingly few. A greater population is generally good.
> There is clearly no set of behaviours I could perform that will satisfy all of them, so I focus on applying Postel’s Principle to the much smaller set of people who are in my “social bubble”
How do you know you aren’t just friends with people who approve of this?
What do you do WRT everyone else? (e.g. shopkeepers, the mailman, taxi drivers)
> If I’m not likely to interact with you soon, or on a regular basis, then I’m relatively free to ignore your opinion.
Are you using Postel’s Principle *solely* for approval? (You say “The more people who like me, the more secure my situation” earlier, but is there another reason?)
> Talking about the “set” of people on whom to apply Postel’s Principle provides a nice segue into the formal definitions that are implicit in the English aphorism.
How can formal definitions be implicit?
Which aphorism? You provided 5 things you called aphorisms, but you haven’t called Postel’s Principle that.
> … [within the context of your own behaviour] something is only morally permissible for me if it is permissible for *all* of the people I am likely to interact with regularly.
What about people you are friends with for particular purposes? Example: a friend you play tennis with but wouldn’t introduce to your parents.
What if one of those people decides that Postel’s Principle is not morally permissible?
> … [within the context of other ppl’s behaviour] it is morally permissible if it is permissible for any of the people I am likely to interact with regularly.
You’re basing your idea of which things are generally morally permissible on what other people think. (Note: you do acknowledge this later, which is good.)
This cannot deal with contradictions between people’s moral views (a case where neither of those people necessarily has a contradiction, but you do).
It also isn’t an idea that works in isolation. Other people might have moral views b/c they have principles from which they derive those views. They could be mistaken about the principles or their application. In such a case would you—even if you realised they were mistaken—still hold their views as permissible? How is that rational?
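To make the asymmetry concrete, here’s a toy sketch of the two quoted definitions as set operations (my formalisation, not eapache’s; the people and actions are made up). Notice that the “for me” set is already empty with just three mildly disagreeing friends, which is exactly the “intersection … shrinks to nothing” problem you quote later:

```python
# Toy model: each person in the social circle maps to the set of actions
# they consider morally permissible. (Hypothetical people and actions.)
circle = {
    "alice": {"eat meat", "drink", "gossip"},
    "bob":   {"eat meat", "tithe"},
    "carol": {"drink", "tithe", "gossip"},
}

# "Permissible for me": allowed by ALL of them (intersection).
for_me = set.intersection(*circle.values())

# "Permissible for others": allowed by ANY of them (union).
for_others = set.union(*circle.values())

print(for_me)      # set() -- empty already, with only three friends
print(for_others)  # {'eat meat', 'drink', 'tithe', 'gossip'}
```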
> Since the set of actions that are considered morally permissible for me are defined effectively by my social circle, it becomes of some importance to intentionally manage my social circle.
This is a moral choice; by what moral knowledge can you make such a choice? I presume you see how using Postel’s Principle here might lead you into a recursive trap (like an echo chamber), and how it limits your ability to error-correct if something goes wrong. Ultimately you’re not in control of what your social circle becomes (or who’s in and who’s out).
> It would be untenable to make such different friends and colleagues that the intersection of their acceptable actions shrinks to nothing.
What? Why?
Your use of ‘untenable’ is unclear: is it just impractical but something you’d do if it were practical, is it unthinkable to do so, or is it just so difficult it would never happen? (Note: I think option 3 is not true, btw.)
> (since inaction is of course its own kind of action)
It’s good you realise this.
> In that situation I would be forced to make a choice (since inaction is of course its own kind of action) and jettison one group of friends in order to open up behavioural manoeuvring space again.
I can see the logic of why you’d *want* to do this, but I can’t see *how* you’d do it. Also, I don’t see why you’d care to if it wasn’t causing problems. I have friends and associates I value whom I’d have to cut loose if I were to follow Postel’s P. That would harm me, so how could it be moral to do so?
It would harm you too, unless the friends are a) collectively and individually not very meaningful (but then why be friends at all?) or b) not providing value to your life anyway (so why be friends at all?). Maybe there are other options?
> Unfortunately, it sometimes happens that people change their moral stances, …
Why is this a bad thing?! It’s **good** to learn you were wrong and improve your values to reflect that.
You expand the above with “especially when under pressure from other people who I may not be interacting with directly”. I’d argue that’s not *necessarily* a change in the person’s preferences; they may just be behaving that way to please someone else. It’s hard to see why that would matter unless the pressure came from within the friend group itself, or impacted the person so much that they couldn’t spend time with you (the second case is something that happens alongside moral pressure in e.g. domestic abuse, so it might be something to seriously consider).
> tomorrow one of my friends could decide they’re suddenly a radical Islamist and force me with a choice.
You bring up a decent problem with your philosophy, but then say:
> While in some sense “difficult”, many of these choices end up being rather easy; I have no interest in radical Islam, and so ultimately how close I was to this friend relative to the rest of my social circle matters only in the very extreme case where they were literally my only acquaintance worth speaking of.
First, “many” is not “all”, so you still have undefined behaviour (like what to do in these situations). Secondly, who cares whether you have an interest in radical Islam? A friend of yours suddenly began adhering to a pro-violence, anti-reason philosophy. I don’t think you need Postel’s P. to know you don’t want to casually hang with them again.
So I think this is a bad example for two reasons:
1. You dismiss the problem because “many of these choices end up being rather easy”, but that’s a bad reason to dismiss it, and I really hope many of those choices are not because a friend has recently decided terrorism might be a good hobby.
2. If you do it just b/c you don’t have an interest, that doesn’t cover all cases; more importantly, to do so for that reason is to reject deeper moral explanations. How do you know you’re “on the right side of history” if you can’t judge it and refuse the available moral knowledge we have?
> Again unfortunately, it sometimes happens that large groups of people change their moral stances all at once. … This sort of situation also forces me with a choice, and often a much more difficult one. … If I expect a given moral meme to become dominant over the next decade, it seems prudent to be “on the right side of history” regardless of the present impact on my social circle.
I agree that you shouldn’t take your friends’ moral conclusions into account when thinking about big societal stuff. But the thing about the “right side of history” is that you can’t predict it. Take the US Civil War: with your Postel’s-P.-inspired morality, your judgements would depend on which state you were in. Leading up to the war, you’d probably have judged the dominant local view to be the one that would endure. If you wouldn’t have judged the situation like that, it means you would have used some other moral knowledge that isn’t part of Postel’s P.
> However what may be worse than any clean break is the moment just before, trying to walk the knife edge of barely-overlapping morals in the desperate hope that the centre can hold.
I agree, that sounds like a very uncomfortable situation.
> Even people who claim to derive their morality from first principles often end up with something surprisingly close to their local social consensus.
Why is this not by design? I think it’s natural for ppl to mostly agree with their friend group on particular moral judgements (moral explanations can be a whole different ball game). I don’t think Postel’s P. need be involved.
Additionally: social dynamics are such that a group can be very *restrictive* in regards to what’s acceptable, and often treat harshly those members who are too liberal in what they accept. (Think Catholics in like the 1600s or w/e)
----
I think the programmingisterrible post is good.
> If some data means two different things to different parts of your program or network, it can be exploited—Interoperability is achieved at the expense of security.
Is something like *moral security* important to you? Maybe it’s moot because you don’t have anyone trying to maliciously manipulate you, but worth thinking about if you hold the keys to any accounts, servers, etc.
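For a concrete (if contrived) instance of that exploit pattern, here’s a minimal Python sketch (my own construction, not from either post) in which two components parse the same query string but disagree about a repeated parameter:

```python
# Classic "parameter pollution" shape: the same bytes mean two different
# things to two different parts of the system.
from urllib.parse import parse_qs

raw = "user=alice&user=admin"

def auth_layer_user(query: str) -> str:
    # Hypothetical auth layer: permissively takes the FIRST 'user' value.
    return parse_qs(query)["user"][0]

def backend_user(query: str) -> str:
    # Hypothetical backend: permissively takes the LAST 'user' value.
    return parse_qs(query)["user"][-1]

print(auth_layer_user(raw))  # alice -- the name that gets permission-checked
print(backend_user(raw))     # admin -- the name that actually gets used
```

The gap between those two answers is the hole; a single strict parser that rejects duplicate keys closes it.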
> The paper, and other work and talks from the LANGSEC group, outlines a manifesto for language based security—be precise and consistent in the face of ambiguity
Here tef (the author) points out that precision and consistency (e.g. having and adhering to well-formed specs) are a way to avoid the bad things about Postel’s P. Do you agree with this? Are your own moral views “precise and consistent”?
> Instead of just specifying a grammar, you should specify a parsing algorithm, including the error correction behaviour (if any).
This is good, and I think applies to morality: you should be able to handle any moral situation, know the “why” behind any decision you make, and know how you avoid errors in moral judgements/reasoning.
Note: “any moral situation” is fine for me to say here b/c “don’t make judgements on extreme or wacky moral hypotheticals” can be part of your moral knowledge.
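To make tef’s “specify the error correction behaviour” point concrete, here’s a toy sketch (my own illustration, using a made-up comma-separated-integers format, not anything from his post) where the error policy is part of the spec instead of being left to each implementation:

```python
def parse_ints(text: str, *, on_error: str = "reject") -> list[int]:
    """Parse comma-separated integers with an explicit, specified error policy."""
    values = []
    for field in text.split(","):
        field = field.strip()
        try:
            values.append(int(field))
        except ValueError:
            if on_error == "reject":  # precise and consistent: fail loudly
                raise ValueError(f"bad field: {field!r}") from None
            elif on_error == "skip":  # error correction, but documented
                continue
            else:
                raise ValueError(f"unknown error policy: {on_error!r}")
    return values

print(parse_ints("1, 2, 3"))                   # [1, 2, 3]
print(parse_ints("1, x, 3", on_error="skip"))  # [1, 3] -- corrected, per spec
```

Either policy can be fine; the point is that what happens on bad input is written down rather than being an accident of the implementation.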