I would consider 3 to be a few.
Weedlayer
Do you feel confident that you could recognize a Bitcoin-like opportunity if one did appear, distinguishing it from countless other unlikely investments which go bust?
You should definitely post the entire quote here, not just the snippet with a link to the quote. For a moment I thought the one sentence was the entire quote, and nearly downvoted it for being trite.
While the quote is anti-rationality, it IS satirical, so I suppose it’s fine.
I’m fairly confident it stands for “Society for Creative Anachronism”.
Too strong.
Nobody EVER got successful from luck? Not even people born billionaires or royalty?
Nobody can EVER be happy without using intelligence? Only if you’re using some definition of happiness that includes a term like “Philosophical fulfillment” or some such, which makes the issue tautological.
The quote always annoyed me too. People bring it up for ANY infringement on liberty, often leaving off the words "Essential" and "Temporary", which makes for a much stronger version of the quote (and, of course, an obviously wrong one).
Tangentially, Sword of Good was my introduction to Yudkowsky, and by extension, LW.
The tricky part is the "achievable levels of accuracy". It would probably have been possible for, say, Galileo to invent general relativity using the orbit of Mercury. But from a pebble, you would need absurdly precise measurements.
Honestly, I did read the source, and it's very difficult to get anything useful out of it. The closest interpretation I could manage is "Theory (in what? Political science?) had become removed from 'other fields' (in political science? In science generally?)".
In general, if context is needed to interpret the quote (i.e., it doesn't stand on its own), it's good to mention that context in the post, rather than just linking to a source and expecting people to follow a comment thread to understand it.
Sorry if this is overly critical, that was not my intention. I just don’t get what the “internecine conflict” you are referring to is.
I’m not really getting anything from this other than “Mainstream philosophy, boo! Empiricism, yeah!”
Is there anything more to this post?
EV(Shot) = -$90
EV(No Shot) = -$104
Difference (Getting the shot minus not getting it) = -$90 - (-$104) = $14
Therefore, get the shot.
The first two values are in the tree. The difference can be figured out by mental arithmetic.
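The arithmetic above can be sketched in a few lines of Python (the two EV figures are taken from the decision tree as stated; only the subtraction is computed here):

```python
# Expected values in USD, read off the decision tree in the original post.
ev_shot = -90      # EV of getting the flu shot
ev_no_shot = -104  # EV of skipping it

# Difference: getting the shot minus not getting it.
difference = ev_shot - ev_no_shot
print(difference)  # 14 -> positive, so getting the shot comes out ahead
```

A positive difference means the shot is the better option in expectation, even though both branches have negative expected value.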
Would that be altruistic value? If I’m not mistaken, the cost of blood donation is generally just time, and the benefit is to other people. I have heard infrequent blood donation might be a health benefit, but I don’t know much about that.
Well, if you don’t value your health at all, then this seems valid.
I have already gotten a flu shot this year, primarily because the cost of getting one is approximately 10 minutes and 0 USD (they're covered by the cost of attendance at my university and offered in a very convenient location for me).
Also more than have died from UFAI. Clearly that’s not worth worrying over either.
I’m not terrified of Ebola because it’s been demonstrated to be controllable in fairly developed countries, but as a general rule this quote seems incredibly out of place on Less Wrong. People here discuss the dangers of things which have literally never happened before almost every day.
My moral position is different from (in fact, diametrically opposed to) Alice’s, but I’m not going to say that Alice’s morals are wrong
You do realize she’s implicitly calling you complicit in the perpetuation of the suffering and deaths of millions of animals, right? I’m having difficulty understanding how you can NOT say that her morality is wrong. Her ACTIONS are clearly unobjectionable (eating plants is certainly no worse than eating meat under the vast majority of ethical systems), but her MORALITY is quite controversial. I have a feeling you accept this case because she is not doing anything that violates your own moral system, while you are doing something that violates hers. To use a (possibly hyperbolic and offensive) analogy, this is similar to a case where a murderer calls the morals of someone who doesn’t accept murder "just different", and something they have the full right to have.
No, I don’t think so. (and following text)
I don’t think your example works. He values success, AND he values other things (family, companionship, etc.). I’m not sure why you’re calling different values "different sides" as though they are separate agents. We all have values that occasionally conflict. I value a long life, even biological immortality if possible (I know, what am I doing on Less Wrong with a value like that? /sarcasm), but I wouldn’t sacrifice 1000 lives a day to keep me alive atop a golden throne. This doesn’t seem like a case of my "don’t murder" side wanting me to value immortality less; it’s more a case of considering the expected utility of my actions and coming to a conclusion about what collateral damage I’m willing to accept. It’s a straight calculation, no value readjustment required.
As for your last point, I’ve never experienced such a radical change (I was raised religiously, but outside of weekly mass my family never seemed to take it very seriously, and I can’t remember caring too much about it). I actually don’t know what makes other people adopt ideologies. For me, I’m a utilitarian because it seems like a logical way to formalize my empathy and altruistic desires, and to this day I have difficulty grokking deontology like natural law theology (you would think being raised Catholic would teach you some of that. It did not).
So, to summarize my ramblings: I think your first example only LOOKS like reasonable disagreement because Alice’s actions are unobjectionable to you, and you would feel differently if positions were reversed. I think your example of different sides is really just explaining different values, which have to be weighed against each other but need not cause moral distress. And I have no idea what to make of your last point.
If I ignored or misstated any of your points, or am just completely talking over you and not getting the point at all, please let me know.
There’s no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn’t it be possible to judge most morality on the basis of these common features, making an argument like “wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing”? I think this is basically the point EY makes about the “psychological unity of humankind”.
Of course, this dream goes out the window with UFAI and aliens. Let’s hope we don’t have to deal with those.
This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I’ll make two points and see if they move the conversation forward:
1: “There’s no reason to consider your own value system to be the very best there is”
This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren’t the absolute best there is. The same logic holds true for morals. I know I’m making some mistakes, but I don’t know where those mistakes are. On any individual issue, I think I’m right, and therefore logically, if someone disagrees with me, I think they’re wrong. This is what I mean by "thinking that one’s own morals are the best". I know I might not be right about everything, but I think I’m right about every single issue, even the ones I might really be wrong about. After all, if I were wrong about something, and I were also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary. I have many beliefs I consider to be only approximations, the best of any explanation I have heard so far. Not perfect, but "least wrong").
Which brings me to point 2.
2: “Also don’t forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.”
I’m absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I’ve been equivocating between the two, that’s why). I know I can’t alter my moral beliefs on a whim, but that’s because I have no reason to want to. Consider self-modifying to want to murder innocents. I can’t do this, primarily because I don’t want to, and CAN’T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn’t get a million dollars?). I suppose promoting instrumental values to terminal values (which morals are) to enhance motivation is a possible reason, but that’s an entirely different can of worms. If I wished I held certain moral beliefs, I would already have them. After all, morality is just saying "You should do X". So wishing I had a different morality is like saying "I wish I thought I should do X". What does that mean?
Not being who you wish to be is an issue of akrasia, not morality. I consider the two to be separate issues, with morality being an issue of beliefs and akrasia being an issue of motivation.
In short, I’m with you for the first line and two following paragraphs, and then you pull a conclusion out in the next paragraph that I disagree with. Clearly there’s a discontinuity either in my reading or your writing.
What basis do you have for judging others’ morality other than your own morality? And if you ARE using your own morality to judge their morality, aren’t you really just checking for similarity to your own?
I mean, it’s the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I’m evaluating others’ beliefs I basically compare them with how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison.
Which of course is similar to the argument people sometimes bring up about “moral progress”, claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs).
My question though is that how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?
Edit: I misunderstood what you said by “rationalize”, sorry.
As Polymath said, rationalization means "to try to justify an irrational position later", basically making excuses.
Anyway, I wouldn’t worry about the downvotes, based on this post the people downvoting you probably weren’t being passive aggressive, but rather misinterpreted what you posted. It can take a little while to learn the local beliefs and jargon.