You have shown that, given a number of seemingly dissimilar long-term goals, we can make arguments which convincingly show that to achieve them one should act in a manner people would generally consider moral. I am not convinced squirrel morality gives me an answer on specific moral questions (abortion, say), but I can see how one might manage it. You have yet to convince me that short-term bases will do the same: I am reasonably confident that many will not. To claim these bases are inferior seems to be begging the question to me.
As to your specific question: how about a basis of wanting to prevent liberalism? It would certainly be difficult to achieve and counterproductive, but to claim that those properties are bad begs the question: you need morality to condemn purposes which are going to cause nothing but pain for all involved.
If you were just to destroy the world, or build a static society and die of a meteor strike one day because your science never advanced, then life could evolve on another planet.
You need enough science and other things to be able to affect the whole universe. And for that you need liberalism temporarily. Then at the very, very end, when you’re powerful enough to easily do whatever you want to the whole universe (it needs to be well within your power, not at the limits of your power, or it’s too risky: you might fail), then finally you can destroy or control everything.
So that goal leads straight to billions of years of liberalism. And that does mean freedom of abortion: ruining people’s lives to punish them for having sex does not make society wealthier, does not promote progress, etc. But it does increase the risk of everyone dying to a meteor before you advance enough to deal with such a problem.
Accomplishing short-term things, in general, depends on principles. Suppose I want a promotion at work within the next few years. It’s important to have the right kind of philosophy: I’ll have a better shot at it if I think well. So I’ll end up engaging with some big ideas. Not every single short-term basis will lead somewhere interesting; if it’s really short, it’s not so important. Also consider this: we can conjecture that life is nice. People cannot use short-term bases, which don’t connect to big ideas, to criticize this. If they want to criticize it, they will have to engage with some big ideas, so then we get liberalism again.
Dealing with issues in order. OK, fine, once again you’ve taken a basis that I’ve given and assumed I want it to apply to the entire universe (note this isn’t necessarily what most people actually mean: just because I want humans to be happy doesn’t necessarily mean I want a universe tiled with happy humans). But even under this assumption I’m not sure I agree: by encouraging liberalism in the short term we may make it impossible to create liberalism in the long term, and you are imagining a society which is human in nature. Humans like liberalism, as a rule, but to say that therefore morality needs liberalism is actually contingent on humans. If I invent a species of blergs who love illiberalism then I can get away with it. Bear in mind that an illiberal species isn’t THAT hard to imagine: we suppose democracy is stable despite liberal societies being destroyed by more liberal ones. You make an assumption of stability based on the past 300 years or so of history, which seems somewhat presumptuous.
I actually agree that given sensible starting assumptions we can get to something that looks like morality, or at least argue strongly in its favour, but those bases have no reason outside of themselves to be accepted. They are axioms, and axioms are by necessity subjective. We can look at them and say “hey, those seem sensible” and “hey, those lead to results that jibe with my intuitions”, but we can’t really defend them as inherent rules. Look at Eliezer’s Three Worlds Collide, with the Baby Eaters. While I disagree with many of the conclusions of that story, the evolution of the Baby Eaters doesn’t sound totally implausible, and there’s a society that’s developed a morality utterly at odds with our own.
On short-term bases: I can obviously invent short-term bases that don’t work. You claim that my murderer is worried about my resurrection. Most aren’t, and it’s easy to just say they want me to die once and don’t care heavily whether I resurrect afterwards. If I do, their desire will already have been fulfilled and they will be sated. This person is weird and irrational, and there are multiple sensible reasons for us to make sure that person does not accomplish their goals, but to claim their goal is worse than ours inherently assumes a number of goals that that individual doesn’t possess.
We have different priorities.
What I want is: if people want to improve, then they can. There is an available method they can use, such as taking seriously their ideas and fully applying them instead of arbitrarily restricting them.
Most murderers don’t worry about resurrection. Yes, but I don’t mind. The point is a person with a murder type of basis has a way out starting with his existing values.
I think what you want is not possible methods of progress people could use if they wanted to, but some kind of guarantee. But there can’t be one. For one thing, no matter what you come up with people could simply refuse to accept your arguments.
They can refuse to accept mine too. I don’t care. My interest is that they can improve if they like. That’s enough.
There doesn’t have to be a way to force a truth on everyone (other than, you know, guns) for it to be an objective truth.
They are easy to imagine. But they are only temporary. They always go extinct because they cannot deal with all the unforeseen problems they encounter.
No. You made an assumption about my reasoning. I certainly didn’t say that; you just guessed it. If you’d asked my reasoning, that isn’t what I would have said.
Mm, I wonder if we are potentially arguing about the same thing here. I suspect our constructions of morality would look very similar at the end of the day, and that the word “objective” is getting in our way. I still don’t see how one can possibly construct a morality which exists outside minds in a real way, as morality is a function of sentience.
morality is answers to questions about how to live. it is not a “function of sentience”.
the theory “morality is not objective” means that for any non-ambiguous question, there are multiple equally good answers.
an example of a non-ambiguous question is, “which computer should i buy today, if any, given my situation and background knowledge, and indeed given the entire state of the universe if it’s relevant”.
morality being objective means if a different person got into an identical situation (it only has to actually be the same in the relevant ways which are limited), the answer would be the same for him, not magically change.
so far this doesn’t have a lot of substance. yet it is what objectivity means. subjectivity is a dumb theory which advocates magic and whim.
the reason objectivity is important (besides for rejecting subjective denials that moral arguments can apply to anyone who doesn’t feel like letting them) is that when you consider lots of objective moral answers (in full detail the way i was saying) you find: there are common themes across multiple answers and many things are irrelevant (so, common themes across all questions in categories specified by only a small number of details). some explanations hold, and govern the answers, across many different moral questions. those are important and objective moral truths. when we learn them, they don’t just help us once but can be re-used to help with other choices later.
It is a convention here to use capital letters at the start of sentences, either as mere stuffy traditional baggage inherited from English grammar or because it makes it easier to parse the flow of a paragraph at a glance.
Why don’t you just solve the problem in software instead of whining about it? It’s not hard.
The problem is solved by software, in the form of the “Vote down” and “Reply” buttons, which allow things like that to eventually be corrected.
(More seriously, having the site automatically add proper capitalization to what people write would be awful, even if it was worth adding a feature for the tiny minority of people who can’t find their “shift” key)
I like it. (Because I had written the exact same thing myself and only refrained from posting it because curi had tipped my fairly sensitive ‘troll’ meter which invokes my personal injunction against feeding with replies.)
Agreed.
This website has such high standards that I would have felt totally out of line if I’d offered a frank opinion on the credibility/crankiness of our visitor.
But I guess what’s awesome about the karma system is that it removes any need to ‘descend to the personal level’. No need to drive people away with mockery or insults.
why would it be awful to have (optional) software support for “a convention here”?
Following the rules of English is solely the writer’s responsibility. Some text input methods, such as onscreen keyboards for cell phones, will do capitalization for you. They will sometimes get it wrong, though, so you have to override them, and this is inconvenient enough that people who’re used to using the shift key don’t want autocapitalization. For example, variable names stay in lower-case if they’re at the start of a sentence, and some periods represent abbreviations rather than the ends of sentences.
More to the point, though, proper capitalization, punctuation, spelling and grammar are signals that reveal how fluent a writer is in English, and whether they’ve proofread. Comments that don’t follow the basic rules of English can be dismissed more readily, because writers still struggling with language are usually struggling with concepts too, and a comment that hasn’t been proofread probably hasn’t been checked for logical errors either.
It seems to me you have a moral theory that people should have to work hard, and be punished for failing to conform to convention, even though, if you want to read it a particular way, you could solve that in software without bothering me. If it’s a convention here in particular, as I was told, then software support could be added to the website instead of being used by individual readers. You’re irrationally objecting to dissident or “lazy” behavior on principle, and you don’t want to solve the problem in a way which is nicer to people you think should be forced to change. This is an intolerant and illiberal view.
Your plan of inferring whether I proofread, or whether I am fluent in English, from my use or not of capitalization is rather flawed. I often proofread and don’t edit capitalization. But you don’t care about that. The important thing to you is the moral issue, not that your semi-factual arguments are false.
I have been trolled. I have lost. I will have a nice day anyways.
I like your attitude, son!
What evidence do you have for this claim? It isn’t at all obvious to me. The only highly sapient species we have encountered is humans, and Homo sapiens aren’t terribly liberal. Do you have examples of other species that are intrinsically illiberal that have gone extinct?
without progress, you don’t get advanced science. that means eventually you die to a supernova explosion or a meteor or something. how could it be otherwise?
You may need to think carefully about what you mean by illiberal and progress. You also may want to consider why an illiberal species can’t construct new technologies as needed to deal with threats.
Urgh, typing this on my phone is less than fun. My point is pretty much finished though