How does this help me become more rational?
That’s ridiculous. So mild pains don’t count if they’re done to many different people?
Let’s give a more obvious example. It’s better to kill one person than to amputate the right hands of 5000 people, because the total pain will be less.
Scaling down, we can say that it’s better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less.
Keep repeating this in your head (notice how consistent it feels, how much sense it makes).
Now just extrapolate to the instance that it’s better to have 3^^^3 people have dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn’t good enough because (people on Earth) × (pain from one hair rip) < (people in New York) × (pain of being nuked). The math doesn’t add up in your straw-man example, unlike with the actual example given.
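To make the arithmetic explicit, here is a toy comparison; every pain value and population figure below is invented purely for illustration, and only the multiplication matters:

```python
# All pain values are made-up "pain units"; only the totals matter.
HAIR_RIP_PAIN = 0.001       # hypothetical pain of one ripped-out hair
NUKE_PAIN = 1_000_000.0     # hypothetical pain of dying in a nuclear strike

PEOPLE_ON_EARTH = 7_000_000_000
PEOPLE_IN_NEW_YORK = 8_000_000

total_hair_rip = PEOPLE_ON_EARTH * HAIR_RIP_PAIN   # 7.0e6 pain units
total_nuke = PEOPLE_IN_NEW_YORK * NUKE_PAIN        # 8.0e12 pain units

# The straw-man comparison fails: total hair-rip pain is far smaller.
print(total_hair_rip < total_nuke)  # True
```

The dust-speck example works the other way precisely because 3^^^3 is so absurdly large that even a microscopic per-person pain dominates the total.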
As a side note, you are also appealing to consequences.
The point is that to an AI, we are but massive, stupid beings attempting to teach it simple symbols with a massive overuse of resources (the few lines of code needed to define “rock” could be used by a sufficiently powerful UFAI to, say, manufacture nukes).
I’m there with one other person. Check the LessWrong Singapore Google group for any future updates. https://groups.google.com/forum/m/#!topic/lesswrong-singapore/cXtHTMQO4xw
On religion:
“Faith is corrosive to the human mind.” - Susan Blackmore
I never really thought about just how damaging blind faith was to my thought processes until I read this quote. It strikes a chord with me.
Finally, a Singapore meetup! Will definitely be there.
For me, the problem with this is that if I’m speaking to an autistic person (and a very large number of LWers identify as being on the autistic spectrum), they tend to use literal meanings very often. In fact, some of them (including me) get offended or confused when they say something literally and it is interpreted as sarcasm or subtext.
Suppose I am speaking to an autistic person, and he says, “I am 87% confident that X is true.” The issue with this statement is that a lot of people use this sort of statement metaphorically (i.e., they pull the number out of their butt to make it oddly specific and get a cheap laugh), but an autistic or rationality-trained person may literally mean that they are 87% sure it is true, especially if they are good at measuring their own confidence levels. In this case the usual situation (the number being picked randomly) does not hold.
There are also, however, a large number of statements that are almost always meant sarcastically or non-literally. The statement “I, for one, welcome our new alien overlords” is almost always sarcastic because 1) it invokes a well-known meme which is intended to be used in this manner and 2) it is extremely unlikely that the person I am speaking to actually wants aliens to take over the world. These statements are, for want of a better word, “cached non-literal statements” (as in, it is an automatic thought that these statements are not literal), or CNLSes for short.
It might be useful to append to your thesis the guideline: “All statements have a probability of being literal that is worth considering, except in the case of CNLSes. This probability is adjusted up if the person you are speaking to is known for being extremely literal, and adjusted down if they are known for using figurative speech (that last part should be fairly obvious, but I throw it in for the sake of completeness).”
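If one wanted to make that guideline mechanical, a minimal sketch might look like this; the base rate, the CNLS discount and the speaker adjustments are all numbers I’ve invented, and only the direction of each adjustment comes from the guideline itself:

```python
def p_literal(base_rate=0.5, is_cnls=False,
              speaker_very_literal=False, speaker_figurative=False):
    """Rough probability that a statement is meant literally.

    All constants here are hypothetical; only the direction of each
    adjustment follows the guideline in the text.
    """
    p = base_rate
    if is_cnls:
        p *= 0.05  # cached non-literal statements are almost never literal
    if speaker_very_literal:
        p = min(1.0, p * 1.5)  # adjust up for extremely literal speakers
    if speaker_figurative:
        p *= 0.5   # adjust down for habitually figurative speakers
    return p

# "I, for one, welcome our new alien overlords" from a figurative speaker:
print(p_literal(is_cnls=True, speaker_figurative=True))  # 0.0125
```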
This actually got me thinking about whether there is a methodical, objective and accurate way to find out if someone’s statement is literal or not, perhaps by measuring their posture or tone of voice. The only difficulty is weaseling some quantifiable data out of context. If it can be done, it would be a great resource for people everywhere who have trouble understanding the non-literal meanings of statements.
I have to go to bed soon, so I will not write up a long post but will leave you with this short statement:
Yes, there is such a point in our rationality training. You underestimate the amount of work needed to get there. I do not think that I can reach that point within the next 30 years, and everyone on LW would have to reach that point for such debates to work. It only takes a few outraged posters to turn a thread into a shitstorm (see the comments and replies above).
It is indeed a word of caution, just like “do not play with electricity” is a word of caution. Grown adults should theoretically be able to handle electricity without getting electrocuted, but doing so (unless they’re electricians) won’t give them many benefits, and there will always be that risk.
I believe he suggested (he is not a moderator but a random poster making suggestions, remember) that jokes, humor and art not be posted here, because this is not a website for jokes, humor and art, unless they somehow relate to rationality. There are plenty of sites for such things if you really have a pressing need to discuss your love of the Mona Lisa or knock-knock jokes with people on the internet.
If you want my opinion, it’s that a debate about Obama’s healthcare reforms is less likely to improve rationality than a debate about the sequences or some other “traditional” topic. If you really want to apply your rationality skills in a real-world context:
It’s right there. Just switch off your computer, go outside and strike up a debate with someone in meatspace.
No problem, and I hope this post taught you how to work better and learn better. If you have problems with procrastination, you can try programs like Beeminder, or simply have a friend act as a watcher to ensure you get your work (or your three new things) done for the day, week or month.
If you are offended by any of gjm’s statements, I suggest you walk away now, because what I’m going to say is going to be just as offensive to you as anything that gjm has posted.
Right, I take issue with your statement that autistic people are irrational, but I think that point has already been made for me. What I am taking issue with now is:
“...then I think that’s a sad state of affairs.”
You believe it is a sad state of affairs that people on LessWrong are discouraged from discussing topics that will harm people more than benefit them? Am I correct in therefore saying that you believe it is a sad state of affairs that people on LessWrong are discouraged from doing stupid and irrational things? Because if so, that doesn’t seem like a sad thing at all.
Consider the case where political commentary is viewed as just as acceptable a topic of debate as any other. Yes, it would be ideal to have everyone here so rational they can discuss politics freely, without risking harm to their rationality. Yet it is a fact that Politics is the Mind-Killer, and this is not going to go away and it is not going to change because you believe in freedom of speech. And I don’t think this is a particularly sad state of affairs, for the very fact that people avoid things that make them irrational is a promising sign that they value their lack of bias.
But you seem to think that the freedom to say silly things like “autistic people are less rational than others”, or to bring up disruptive topics, outweighs that consideration.
At this point, I would like to recommend that you close the window right now, turn away from the computer and think hard about whether complete freedom of speech is one of those things that, in the minds of some people, automatically equals a win. I can’t recall the technical term for it, but I do recall quite strongly that it will kill your mind.
Learn Three Things Every Day
I did the survey.
I’ll just point out that I actively cut off relationships with people of no value before I read this. Therefore, your argument that non-cultists don’t cut off relations with zero-value people is incorrect in at least one case and possibly more; and as it is the core of your argument, your argument fails in at least one case and possibly more.
Okay, thanks for the update. The idea of measuring agentness, while simultaneously being careful not to apply the halo effect to it, is of course fundamentally sound. I would propose treating the perceived agentness of a given person as a belief, so that it can be updated quickly with well-known rationalist patterns when attention shifts to another domain.
Let us take the example, given in your post, of a person who is very agenty in managing relationships but bad at time management. In this case I would observe that this person displays a high level of agentness in managing relationships. That does not equate to high agentness in other fields, yet it may indicate an overall trend of agentness in his life. Therefore, if his relationship agentness level is 10, I might estimate a prior for his agentness in any random domain of, say, 6.
Now, suppose I observe him scheduling his tasks with a supposed agentness of 6, and he screws it up completely because of an inherent weakness in that domain which I didn’t know about. After the first few times he is late, I would lower the probability of my belief that his agentness in that domain (time management) is actually 6, and raise the probability that it is, say, 3, with a slight increase for the adjacent levels (2 and 4).
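As a rough sketch of that updating process (the levels, prior weights and likelihoods below are all invented for illustration):

```python
# Belief over agentness levels 1..10 in the time-management domain.
# The prior is centred on 6, spilled over from his relationship agentness.
levels = list(range(1, 11))
weights = [1.0 if lvl == 6 else 0.3 for lvl in levels]  # rough prior weights
total = sum(weights)
belief = [w / total for w in weights]

def p_late_given_level(lvl):
    # Hypothetical likelihood: the more agenty he is, the less often he is late.
    return 1.0 - lvl / 12.0

for _ in range(3):  # observe him turn up late three times
    belief = [b * p_late_given_level(lvl) for b, lvl in zip(belief, levels)]
    norm = sum(belief)
    belief = [b / norm for b in belief]

# Probability mass drains away from 6 toward lower levels such as 3,
# with some weight landing on the adjacent levels 2 and 4.
for lvl, b in zip(levels, belief):
    print(lvl, round(b, 3))
```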
However, cached thoughts do interest me. We have seen clearly that cached thoughts can act against agentness; but in my opinion the correct path is to build cached thoughts for agentness. Say you discover that in situation X, given Y and Z, A is almost always (or with a sufficiently high percentage chance) the most agenty option. Then you can use your System 2 to train your System 1 to store this pattern, and in future situations you will reflexively perform A, slowing down to deliberate only when the chance that the agenty option is not A after all, multiplied by the disutility of wrongly performing A, is high enough.
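One way to picture that cache-plus-slow-down rule; every situation, probability and disutility below is hypothetical:

```python
# Cached "agenty" responses: (situation, condition) -> (action,
# probability the cached action is right, disutility if it is wrong).
CACHE = {
    ("deadline_approaching", "task_unstarted"): ("start_smallest_subtask", 0.95, 2.0),
    ("conflict_with_friend", "both_calm"): ("ask_clarifying_question", 0.80, 8.0),
}

RISK_THRESHOLD = 1.0  # hypothetical tolerance for expected loss

def deliberate(situation, condition):
    return f"deliberate carefully about {situation} given {condition}"

def respond(situation, condition):
    action, p_right, disutility = CACHE[(situation, condition)]
    expected_loss = (1.0 - p_right) * disutility
    if expected_loss < RISK_THRESHOLD:
        return action                        # System 1: fire the cached reflex
    return deliberate(situation, condition)  # System 2: slow down and think

print(respond("deadline_approaching", "task_unstarted"))  # cached reflex fires
print(respond("conflict_with_friend", "both_calm"))       # expected loss 1.6 -> deliberate
```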
I would say that cached thoughts are a very interesting phenomenon, able to control the first actions of a human being (and the actions that we, being impulsive creatures, normally take first), and that with proper training it might even be possible to use them for good.
I will probably read this post in more detail when the font isn’t hurting my sleep-deprived eyes. Please fix!
27chaos, that is a very interesting paper and I thank you for the find. It’s actually quite a happy coincidence, as neural networks (having been prompted by the blegg sequence) were next on my to-study list. Glad to be able to add this paper to my queue.
Very useful and instructive post. I would like to comment that one of the biggest tests (or so it seems to me) for checking whether a belief chain is valid is the test of resistance to arbitrary changes.
You write that systems like [I was abused] <-> [people are meanies] <-> [life is horrible] are stable, and that this is why people believe them: they seem to hold sound under their own reasoning. But they are inherently not stable, because they are not connected to the unshakable foundation of the source of truth (reality)!
Suppose you apply my test and arbitrarily change one of the beliefs. Let’s say I change the belief [I was abused] to [I was not abused] (an entirely plausible viewpoint to hold, unless you think that everyone is abused). In that case the whole chain falls apart, because if you were not abused, nothing supports the belief that people are meanies, which in turn allows for a possible non-terrible world. The system is therefore only stable on the surface. A house is not called solid if it can stand up; it is called solid if it can withstand rough weather (arbitrary changes) without falling.
Let’s look at the truthful chain [Laws of physics exist] <-> [Gravity exists] <-> [If I jump, I will not float]. Here we can arbitrarily change the value of ANY belief and still have the chain repair itself. Say I declare that the LAWS OF PHYSICS ARE FALSE. I would merely reply, “Gravity, supported by the observation that jumping people fall, proves, or at least very strongly evidences, the existence of a system of rules that governs our universe,” and from there work out the laws of physics from basic principles at caveman level. It might take a long time, but in principle it works.
Now, if I say that gravity does not exist, a few experiments deriving gravity from the laws of physics will prove me wrong. And if I claim that when I jump I will not fall, gravity, supported by the laws of physics, thinks otherwise (and enforces its opinion quite sharply).
The obvious objection here is that in the second example there is a third person saying “these things are false”, as opposed to god making an actual change in the first. But the key point is that genuinely stable (i.e., true) belief chains cannot logically absorb such a random change without auto-repairing themselves. It is impossible to imagine the laws of physics existing as they are and yet gravity being arbitrarily different for some reason. The truth of the belief chain holds all the way from the laws of physics down to quantum mechanics, the finest-grained level of reality we have found so far.
It seems clear to me that the ability to survive and repair an arbitrary change is what differentiates true chains from bad ones.
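The test can even be phrased as a toy simulation; the beliefs and evidence links below are invented stand-ins for the examples above:

```python
# A belief either rests directly on observation or is derived from other
# beliefs. Grounded chains repair a flipped belief; floating chains collapse.

# chain: belief -> list of beliefs it is derived from ([] = claimed axiom)
# observed: beliefs directly supported by looking at reality
def survives_flip(chain, observed, flipped):
    supported = set(observed)
    changed = True
    while changed:  # re-derive everything still reachable from evidence
        changed = False
        for belief, sources in chain.items():
            if belief not in supported and sources and all(s in supported for s in sources):
                supported.add(belief)
                changed = True
    return flipped in supported  # True: the chain repaired the flip

floating = {"I was abused": [],
            "people are meanies": ["I was abused"],
            "life is horrible": ["people are meanies"]}
grounded = {"jumping people fall": [],  # directly observed
            "gravity exists": ["jumping people fall"],
            "laws of physics exist": ["gravity exists"]}

print(survives_flip(floating, observed=[], flipped="I was abused"))    # False
print(survives_flip(grounded, observed=["jumping people fall"],
                    flipped="laws of physics exist"))                  # True
```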
Thanks a lot. I really appreciated that comment.
A psychopath would have no problem with this, by the way; he’d just step on the heads of people and be on his merry way, calm as ever.
I don’t think people have a right to lie to other people. I also can’t understand why you would regret breaking up with someone so truth-averse and horrible.