So I’ll try a different approach—instead of giving it a go myself again, I’ll simply ask, what do /you/ think a good LW post about liberty, freedom, and fundamental human rights would look like?
The problem with writing about these concepts directly is that they’re (a) very hard to define, and (b) applause lights. So while everyone agrees that they’re good and important, few people agree what they are. In order to write a meaningful post about “freedom”, you have to get specific and talk about “freedom to do X”—and in that case you’re usually better off talking about X without the applause light. When people try to talk about freedom, liberty and/or human rights from a thousand-mile-high perspective, without zooming in on a specific and concrete example, they end up missing all the important distinctions and getting hopelessly confused.
Thank you for that reply—it was cogent, descriptive, and helps me figure out what I can try doing next.
(Eg, maybe something along the lines of “Man is a rational animal—he doesn’t use claws or poison to survive, he uses his brain. In order /to/ use his brain most effectively, he has to be able to do certain things—most fundamentally, he has to stay alive, and in order to do that, he has to X, Y, and Z; in order to come up with new ideas to know better how to stay alive, he has to be able to discuss ideas freely; etc, etc, etc.”)
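Is that really your analysis of human society from the ground up, though, or did you try to figure out how to create a rational argument for liberty?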
It’s not at all clear to me that if people are primarily concerned with staying alive, we should be preserving their liberty to discuss ideas freely; reasonably competent authorities passing restrictions can keep people quite safe without providing them with many liberties at all. In fact, if I really wanted to design a society optimized for keeping people alive, it would probably look rather like a prison system.
The question you should be asking yourself is not “what justifies my package of political beliefs,” but “what do I think people really want out of society, and how do I optimize for that?”
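Not quite from the ground up; the version that /does/ start from the ground up is summarized in http://www.datapacrat.com/sketches/Rational01ink.jpg and http://www.datapacrat.com/sketches/Rational02ink.jpg.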
The question you should be asking yourself is not “what justifies my package of political beliefs,” but “what do I think people really want out of society, and how do I optimize for that?”
How about, “What do I think /I/ want out of society, and how do I optimize for that?”?
In theory that might be the best way of going about things, but if it doesn’t generalize well to other people, you’re unlikely to get others on board with it, which limits the usefulness of framing the question that way.
But surely DataPacRat’s is the correct question. (Of course, if what DataPacRat really desires is that other people get what they want, then it’s hard to distinguish the questions.) Once that question is answered, and we are in the optimisation phase, we can consider how best to frame discussions of the issues to convince other people that they want it too (or whatever is most effective).
I wonder if it’s possible to try to resolve the difference between the two. (I remember reading about something called ‘desire utilitarianism’ which, IIRC, was focused on reconciling such matters.)
Man is a rational animal—he doesn’t use claws or poison to survive
Why is survival one of your goals? (“I want it.” is an acceptable answer, but you have to accept that you might only want it due to being misinformed; even if it is probably correct, it is extremely unlikely that all your desires would be retained in your reflective equilibrium.) Is it your only goal? Why?
he uses his brain.
Intelligence may be our comparative advantage over other animals, but we’re not trading with them. Brains are useful because they solve problems, not because they happen to be our species’ specialty.
In order /to/ use his brain most effectively, he has to be able to do certain things—most fundamentally, he has to stay alive
If survival infringes on your other desires it becomes counterproductive. Beware lost purposes. Even if this doesn’t hold, maximizing your probability of survival is not the same as maximizing whatever you actually prefer to maximize. If you only focus on survival, you risk giving up everything (or everything else if you value survival in itself—I don’t think I do, but I’m very uncertain) for a slightly increased lifespan.
Why is survival one of your goals? (“I want it.” is an acceptable answer, but you have to accept that you might only want it due to being misinformed; even if it is probably correct, it is extremely unlikely that all your desires would be retained in your reflective equilibrium.) Is it your only goal? Why?
At the moment, my primary goal is the continued existence of sapience. Partly this is because purpose and meaning aren’t inherent qualities of anything, but are projected onto things by sapient minds; since I want my existence to have had some meaning, sapients have to continue to exist for that to be possible. Or, put another way, for just about /any/ goal I can seriously imagine myself wanting, the continued existence of sapience is a necessary prerequisite.
If survival infringes on your other desires it becomes counterproductive. Beware lost purposes. Even if this doesn’t hold, maximizing your probability of survival is not the same as maximizing whatever you actually prefer to maximize. If you only focus on survival, you risk giving up everything (or everything else if you value survival in itself—I don’t think I do, but I’m very uncertain) for a slightly increased lifespan.
If I seriously come to the conclusion that my continued life has a measurable impact that /reduces/ the probability that sapience will continue to exist in the universe… then I honestly don’t know whether I’d choose personal death. For example, one of the goals I’ve imagined myself working for is “Live forever or die trying”, which, as usual, requires at least some sapience in the universe (if only myself), but… well, it’s a problem I hope never to have to encounter… and, fortunately, at present, I’m trying to use my existence to /increase/ the probability that sapience will continue to exist, so it’s unlikely I’ll ever encounter that particular problem.
Partly this is because purpose and meaning aren’t inherent qualities of anything, but are projected onto things by sapient minds; since I want my existence to have had some meaning, sapients have to continue to exist for that to be possible. Or, put another way, for just about /any/ goal I can seriously imagine myself wanting, the continued existence of sapience is a necessary prerequisite.
The two ways of putting it are not equivalent; it is possible for a sapient mind to decide that its purpose is to maximize the number of paperclips in the universe, which can be achieved without its continued existence. You probably realize this already though; the last quoted sentence makes sense.
I’m trying to use my existence to /increase/ the probability that sapience will continue to exist, so it’s unlikely I’ll ever encounter that particular problem.
If you had a chance to perform an action that led to a slight risk to your life but increased the chance of sapience continuing to exist (in such a way as to lower your overall chance of living forever), would you do so? It is usually impossible to perfectly optimize for two different things at once; even if they are mostly unopposed, near the maxima there will be tradeoffs.
If you had a chance to perform an action that led to a slight risk to your life but increased the chance of sapience continuing to exist (in such a way as to lower your overall chance of living forever), would you do so?
A good question.
I have at least one datum suggesting that the answer, for me in particular, is ‘yes’. I currently believe that what’s generally called ‘free speech’ is a strong supporting factor, if not a necessary prerequisite, for developing the science we need to ensure sapience’s survival. Last year there was an event, ‘Draw Muhammad Day’, to promote free speech; before it actually happened, there was a non-zero probability that anyone participating in it would receive threats, and potentially even violence, from certain extremists. While that was still the calculation, I joined in. (I did get my very first death threats in response, but nothing came of them.)
You have evidence that you do, in fact, take such risks, but, unless you have considered the issue very carefully, you don’t know whether you really want to do so. Section 1 of Yvain’s consequentialism FAQ covers the concept of not knowing, and then determining, what you really want. (The rest of the FAQ is also good, but it’s not directly relevant to this discussion and I think you might disagree with much of it.)
I know this is tangential, but what is it with libertarians and unnecessarily gendered language? I truly don’t mean that as a rhetorical question, an attack on you personally, or any kind of specific political point; it’s something I’ve been sincerely curious about before, and maybe you know the answer. Why do so many (obviously not all) libertarian and Randian types seem so attached to the whole everyone-is-“man”/“he” schema, including the ones who are way too young to have lived in times before people started realizing why that was a bad idea? Proportionally, even social conservatives don’t seem to do that nearly as much anymore.
“Use gender-neutral language” is motivated by an egalitarian instinct, and is said by (moral) authorities—both are things libertarians don’t seem very fond of.
(I don’t identify very strongly as a libertarian, but can relate to the kneejerk reflex against being told what to do)
Also, some people might not phrase it as “people started realizing why that was a bad idea” but rather as “sanctimonious politically correct busybodies started telling everybody how to speak, resulting in some horrible eyesores like he/she or ey all over the place”. I don’t really buy the second version, but I don’t think the first one is a fair description either (though it’s hard to judge from a French point of view; gender and grammar work a bit differently in French).
I suspect that it is due to emotional reactions against feeling like one is being told what to do. I don’t know which comes first: the philosophical attitude leading to such emotional reactions, or the emotional attitude making one more likely to accept a libertarian philosophical viewpoint. But given such an emotional reaction, one can easily see people going out of their way to avoid using terminology that they feel they are being told to use.
That’s a good question—though I’m not sure I can think of a good answer.
I know that, in most of my writing, I tend to use ‘they’ as a gender-neutral third-person singular pronoun… when I wrote ‘Man is a rational animal, etc’, I was aware that I could have rephrased the whole thing to be gender-neutral… but when writing, I felt that it wouldn’t have provided the same feeling—short, sharp, direct, to-the-point. The capitalized term ‘Man’ is, for good or ill, shorter than the word ‘humanity’, and “Man is a rational animal” has a different sense about it than (I wanted to insert ‘the mealy-mouthed’ here, which isn’t a term I remember actually having used) “humans are rational creatures”.
There’s probably something Dark-Artish in there somewhere, though it wasn’t a conscious invocation thereof.
I’d guess it’s the gender split. It’s a doozy.
I think you’d see less gendered language in libertarian journalism, where there are more women.