Before I edited it, it was roughly like the current version, but with the second paragraph removed, the last two sentences of the third paragraph removed, and the third and fourth paragraphs combined into one. I’m glad gwern posted his comment, though, because I think the post is much better now.
Are you sure you understood the point? I am highlighting a writing technique where you write the same short story over and over again, slightly differently each time, to convey a probabilistic model to the reader in a way that is interesting. HPMoR is not quite this: it’s a different story every time, with a different lesson every time, and the stories are treated as a sequence of events.
He literally tells the same story over and over again, differently every time. He has several stories that he does this to. The book is a fictional autobiography; O’Brien was in the Vietnam War, and writes as though he were recollecting stories from the Vietnam War, but the stories are all made up. Here, I found an excerpt that illustrates the principle in a somewhat okay manner.
EDIT: Here, this is better (PDF warning).
If you want, read it. Hopefully, though, the principle I was highlighting is clear anyway: while fiction with a probability distribution given for each sequence of events is boring, fiction with many short stories describing the different possible scenarios is interesting, and conveys the same probabilistic model.
Should I give examples of how O’Brien does it? I don’t know how much I can type out without violating copyright law.
How Tim O’Brien gets around the logical fallacy of generalization from fictional evidence
I’ve found that going by significant digits helps.
“If I represented the date that Einstein came to the US with only one significant digit of precision, what would it be? Definitely 2000. What about two? Definitely 1900. What about three? Probably 1900 again; I’m willing to take that bet. But four digits of precision? I’m not sure at all. I’ll leave it as 1900.”
The answer came out way off, but hopefully it prevented any anchoring, and it also accurately represents my knowledge of Einstein (namely, I know which properties of physics he discovered, and I know that he wrote his most important papers in the earlier half of the 190Xs, which must have also been when he came to the US). In hindsight, I probably should have taken historical context into account (why would Einstein leave for the US in the first place? if I had considered this, my guess would probably have ended up as 1910 or 1920), but that’s either hindsight bias or a lesson to be learned.
An improvement to this method might be to explicitly consider the range of numbers that would round to a given significant digit (if the three-significant-digit number is 1900, then he came between 1895 and 1904; does that sound more plausible than him coming sometime between 1905 and 1914?). But this might just make the anchoring effect worse, or introduce some other bias.
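For what it’s worth, here’s a minimal sketch of that rounding-and-range arithmetic in Python (the helper names are mine, and 1933, the year Einstein actually moved to the US, appears only to check the rounding against the guesses above):

```python
from math import floor, log10

def sig_round(x, digits):
    """Round x to the given number of significant digits."""
    magnitude = 10 ** (floor(log10(abs(x))) - digits + 1)
    return round(x / magnitude) * magnitude

def implied_range(rounded, digits):
    """Range of true values consistent with a figure rounded to `digits`."""
    magnitude = 10 ** (floor(log10(abs(rounded))) - digits + 1)
    return (rounded - magnitude / 2, rounded + magnitude / 2)

print(sig_round(1933, 1))      # 2000 -- one significant digit of precision
print(sig_round(1933, 2))      # 1900
print(sig_round(1933, 3))      # 1930
print(implied_range(1900, 3))  # (1895.0, 1905.0), i.e. 1895-1904 for whole years
```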
That’s exactly what I can’t make my mind up about, and it forces me to default to nihilism on things like that. Maybe it really is irrelevant where the pleasure comes from? If we did wirehead everyone for eternity, would it be sad if everyone spontaneously disappeared at some point? Those are questions that I can’t answer. My morality is only good for today’s society, not tomorrow’s. I guess strictly morally, yes, wireheading is a solution, but philosophically, there are arguments to be made against it. (Not from a nihilistic point of view, though, which I am not comfortable with. I guess, philosophically, I can adopt two axioms: “Life requires meaning,” and “meaning must be created.” And then arises the question, “What is meaning?”, at which point I leave it to people with real degrees in philosophy. If you asked me, I’d try to relate it to the entropy of the universe somehow. But I feel that I’m really out of my depth at that point.)
True, I swear! I think I can summarize why I was so distraught: external factors, the fact that this was a trusted friend (also one of my only friends), and that I was offended by related things they had said before. I am seeking help, though.
You’re completely right. At first I tried to look for ways that “some areas shouldn’t have consistent belief systems attached” could be a true statement, but that made me upset or something (wtf, me?), so I abandoned that and resolved to attack the argument, and to accept it only if I couldn’t find a fault with it. And that’s clearly bad practice for a self-proclaimed rationalist! I’m ashamed. Well, I can sort of make the excuse of having experienced emotions, which made me forget my principles, but that’s definitely not good enough.
I will be more careful next time.
EDIT: Actually, I’m not sure it’s so cut-and-dried. I’ll admit that I ended up rationalizing, but it’s not as simple as “didn’t notice confusion”. I definitely did notice it. When I’m presented with an opposing argument, I try to figure out at what points it contradicts my own beliefs. Then I check whether those beliefs are well-founded. If they aren’t, I throw them out and attempt to form new ones, adopting the foreign argument in the process. If I find that the beliefs it contradicts are well-founded, then I say that the argument is wrong because it contradicts these particular beliefs of mine. Then I go back to the other person and tell them where it contradicts my beliefs, and this repeats until one of us can’t justify our beliefs, or we find that we have contradictory basic assumptions. That is what I did here, too; I just failed to examine my beliefs closely enough, and ended up rationalizing as a result. Is this the wrong way to go about things? There’s of course a lot to be said about actual beliefs about reality in terms of prior probability and such, so that can also be taken into account where it applies. But this was a mostly abstract argument, so that didn’t apply until I introduced an epistemological argument instead. So is my whole process flawed? Or did I just misstep?
Oh, okay. That makes sense. So then what’s the rational thing to conclude at this point? I’m not going to go back and argue with my friend—they’ve had enough of it. But what can I take away from this, then?
(I was using the French term philosophe, not omitting a letter, though. That’s how my history book used to write it, anyway.)
My point was that they probably did mean both things, because the distinction between “it’s impossible” and “I don’t know how” is not really clear in their mind. But that is not as alarming as it would be coming from someone who did know the difference, and insisted that they really did mean “impossible.”
Hmm, I agree, but I don’t think that it adequately explains the entire picture. I think it might have been two different ideas coming from two different sources. I can imagine that my friend had absorbed “applying formalized reason to society is bad” from popular culture, whereas “I don’t know what the founding propositions of social justice are”, and subsequently “it might not be possible for there to be such things” (like you talked about), came from their own internal evaluations.
Okay, I’ll bite. What are they?
I kinda wanted to avoid this because of social approval etc., and also brevity, but okay:
1. Everybody is completely, equally, and infinitely entitled to life, positive feelings, and a lack of negative feelings.
2. One must forfeit gratification of axiom 1 in order to help others achieve it. (This might be badly worded. What I mean is that you also have to consider others’ entitlement to the things in axiom 1 when you act, and while others do not yet have the things in axiom 1, one should be helping them get those things rather than pursuing them for oneself.)
I know it loses a lot of nuance this way (to what extent must you help others? well, so that it works out optimally for everyone; but what exactly is optimal? the sum of everyone’s life/positive feelings/lack of negative feelings? that’s left undefined), but it works for me, at least.
It seems possible that when your friend said, in effect, that there can never be any axioms for social justice, what they really meant was simply, “I don’t know the axioms either.” That would indeed be a map/territory confusion on their part, but it’s a pretty common and understandable one. The statement, “Flying machines are impossible” is not equivalent to “I don’t know how to build a flying machine,” but in the short term they are making a similar prediction: no one is flying anywhere today.
They seemed to be saying both things.
Actually, and I don’t know if you’ve thought of it this way, but in asking for the axioms of social justice theory, weren’t you in effect asking for something close to the solution to the Friendly AI problem? No wonder your friend couldn’t come up with a good answer on the spot!
Hah, that’s true! I didn’t think of it that way. I don’t know that much about the Friendly AI problem, so I wouldn’t know anyway. I’ve been able to reduce my entire morality to two axioms, though (which probably aren’t especially suitable for AI or a 100% rational person, because there’s no possibility at all that I’ve actually found a solution to a problem I know nothing about that has been considered by many educated people for long periods of time), so I thought that maybe you could find something similar for social justice (I was having trouble deciding on what to feel about certain fringe cases).
I don’t think your friend’s point of view is impossible to argue against (as I mentioned in my other comment you can argue based on results)
I’m talking hypothetically. I did allow myself to consider the possibility that the idea was not perfect. Actually, I assumed that until I could prove otherwise. It just seemed pretty hopeless, so I’m considering the extreme.
it’s not obvious to me that you’ve correctly understood your friend’s point of view
Maybe not. I’m not angry at my friend at all, nor was I before. I felt sort of betrayed, but my friend had reasons for thinking what they did. If I think the things or reasons are wrong, I can tell my friend, and then maybe they’ll respond; and if they don’t, it’s good enough for me to have a reasonable interpretation of their argument, unless holding what I believe to be a wrong belief is going to hurt them. Then there’s a problem. But I haven’t encountered that yet. The point is that, to me, it is much more interesting/useful/not tedious to consider this idea that challenges rationality very fundamentally than to try to argue against the idea that everybody who has tried to apply rationality to society had it wrong, which is a very long battle that needs to be fought with history books and citations.
I still think you are focusing too much on the semantic content of the conversation.
Then what else should I focus on?
You are a human, and humans get angry for a lot of reasons, e.g. when other humans challenge their core beliefs.
I like having my beliefs challenged, though. That’s what makes me a rationalist in the first place.
Though, I have thought of an alternate hypothesis for why I was offended. My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society. And maybe that offended me. Just because I was like them in that I was trying to apply rationality to society (which I had rational reasons for doing), I was as bad as a white supremacist. Again, I can’t be mad at my friend, since that’s just a belief they hold, and beliefs can change, or be justified. My friend had reasons for holding that belief, and it hadn’t caused any harm to anybody. But maybe that was what was so offensive? That sounds at least equally likely.
It took me the whole day to figure even that out, really. Stress from other sources was definitely a factor, but what I observed is that whenever I thought about that idea, I got very angry, and got sudden urges to throw heavy things. When I didn’t think about it, I was less angry. I concluded later that I was angry at the idea. I wasn’t sure why (I’m still not completely sure: why would I get angry at an idea, even if it was something that was truly impossible to argue against? a completely irrefutable idea is a very special one; I guess it was the fact that the implications of it being right weren’t present in reality), but it seemed that the idea was making me angry, so I used the general strategy of probing the idea for weak points, seeing whether I could substitute something more logical for its inferences and something more likely for its assumptions. Which is how I arrived at my conclusions.
Oh. Well, that was a while ago, and I get over that stuff quickly. Very few people have that power over me anyway; they were one of the only friends I had, and it was extremely unusual behavior coming from them. It was kind of devastating that there was a negative thought directed at me by a trusted source that I couldn’t explain… but I could, so now I’m all the more confident. This is a success story! I’ve historically never actually attempted suicide, and it was a combination of other stress factors as well that produced that response. I doubt that I actually would, in part because I have no painless means of doing so: when I actually contemplate the action, it’s just logistically impossible to do in a way I’d accept. I’ve also gotten real good at talking myself out of it. Usually it’s out of a “that’ll show ’em” attitude, which I recognize immediately, and I also recognize that that would be both cruel and a detriment to society. So, I appreciate your concern for me a lot, but I don’t think I’m in any danger of dying at all. Thanks a lot for caring, though!
Well, the friend had counterexamples to “math as a basis for society is good”. I sort of skipped over that. They mentioned those who rationalized bad things like racism, and also Engels. (We both agree that communism is not a successful philosophy.) Counterexamples aren’t really enough to dismiss an idea unless they’re stronger than the evidence that the idea is good, but I couldn’t think of such evidence at the time, and I still can’t think of anything particularly convincing. There’s no successful society to point at that derived all of its laws and government axiomatically.
How to offend a rationalist (who hasn’t thought about it yet): a life lesson
Wasn’t that what Einstein said about QM?
I once conducted an experiment in which I threw a die 500 times, then prayed for an hour every day for a week that the die consistently land on a four, and then threw the die 500 more times. The correlation was next to zero, so I concluded that God does not answer prayers about dice from me.
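(For concreteness, here’s roughly what that analysis looks like as a minimal Python sketch, assuming “correlation” means the point-biserial correlation between the prayer condition and whether a throw came up four; the throws below are simulated stand-ins, since I don’t have the originals:)

```python
import random

random.seed(0)  # hypothetical stand-in data for the lost original throws

before = [random.randint(1, 6) for _ in range(500)]  # throws before praying
after = [random.randint(1, 6) for _ in range(500)]   # throws after a week of prayer

# Correlate "was this throw in the after-prayer condition?" (0/1)
# with "did the die land on four?" (0/1).
xs = [0] * 500 + [1] * 500
ys = [int(v == 4) for v in before + after]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5

# With a fair die and no effect of prayer, r should hover near zero.
print(f"correlation: {cov / (sx * sy):.3f}")
```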
Right, that’s true. In the particular case of The Things They Carried, I’d trust O’Brien moderately well to depict what the Vietnam War was like, since he participated in it.