Open thread, Feb. 23 - Mar. 1, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I’m drafting a post for Discussion about how users on LessWrong who feel disconnected from the rationalist community can get involved and make friends and stuff.
What I’ve got so far:
1. Where everybody went away from LessWrong, and why
2. How you can keep up with great content/news/developments in rationality on sites other than LessWrong
3. Get involved by going to meetups, and using the LW Study Hall
What I’m looking for:
A post I can link to about why the LW Study Hall is great.
Testimonials about how attending a meetup transformed your social or intellectual life. I know this is the case in the Bay Area, and I know life became much richer for some friends I have in, e.g., Vancouver or Seattle.
A repository of ideas for meetups, and other socializing, if somebody planning or starting a meetup can’t think of anything to do.
How to become friends and integrate socially with other rationalists/LWers. A rationalist from Toronto visited Vancouver, noticed we were all friends, and asked us how we all became friends, rather than remaining a circle of individuals who share intellectual interests but not much else. The only suggestions we could think of were:
and
These are bad or impractical suggestions. If you have better ones to share, that’d be fantastic.
Please add suggestions for the numbered list. If relevant resources don’t exist, notify me, and I/we/somebody can make them. If you think I’m missing something else, please let me know.
I would advise against this. First become friends, then move to the same house. Some people are insufferable, and you don’t want to become their roommate just because one day they decided to visit your meetup and liked it. Not everyone who enjoys visiting your rationalist meetup is good friend material.
I think it would be good to have a debate about “what do you want to get from LW? / why are you here?” Different people will give different answers. People who gave the same answer now have something to talk about that is interesting to all of them. Some people want to change the world. Some people just want to chat. Both are valid goals, but if you mix those people together, it may be frustrating for all of them. (“Why are we just talking and wasting time? Is that the most rational thing to do?” “Why do you always insist on doing something? I have enough work outside of LW. I come here to relax and have an intelligent debate.”)
Do something that is not merely sitting and debating, e.g. take a walk together. Or maybe go to a gym.
I was exaggerating for effect: my friend didn’t actually recommend strangers become roommates. Sorry about that. ‘Becoming roommates’ actually seemed to me a bad suggestion: finding a residence to accommodate everyone, the timing of people’s lives, lifestyle considerations, etc., make it very complicated. Anyway, I approve of your debate idea. I’ll include caveats about how tips for getting involved hinge on knowing what one wants to get out of LessWrong and the surrounding community.
Obvious suggestion: Map of the rationalist community.
Tumblr also has a pretty active aspiring-rationalist community, and they have a Skype group as well. When I was still in it, the Skype group was a very useful resource, since you always had people to talk with about “rationality topics,” which isn’t a common resource in everyone’s life.
Kaj Sotala wrote a pdf called “How to run a Less Wrong meetup” or something like that.
Undeveloped response for item 4.
To my knowledge, people form close bonds in two ways. First: sharing things which make them feel vulnerable, the things you feel you need to trust someone a lot before you can tell them. Paradoxically, telling people these things creates trust in them and encourages friendship. Just don’t give too much too quickly. Plan: create exercises where people are directed in sharing intimate things about themselves, and perform these at meetups.
Second: Jointly overcoming adversity or challenge. When a new member joins a meetup, take them to do things you are good at and help them become good at it too. Then switch roles.
There are two levels to being friends: (1) Emotional exchange. Deep and meaningful discussions about important issues in our lives make us bond together.
(2) Doing things together outside of the meetup. Are you going to an event that’s interesting for rationalists? Ask people from the meetup to come along with you. Are some people from the meetup organizing an event? Go there.
FYI, my expectation is to start posting a daily sequence of articles on the fundamentals of economics on March 1. There are 18 of them at last count. If all goes well, I have plans for, and am working on, 2.5 more sequences of similar length on the price system, welfare economics, general equilibrium theory....
A few days back I made a ZeeMap for LessWrong and promised to post it in an Open Thread, so here it is: https://www.zeemaps.com/map?group=1323143
It’s basically a way of finding people that share a particular interest living nearby. There is no need to register—just add your marker and keep an eye out for someone living near you.
I like these things, and I’ve added myself, but… there’ve been similar ones in the past, they don’t stay up to date. I feel like without some way for users to know, six months from now, that the information they’re looking at is still accurate, any given iteration is never going to be very useful for long.
An idea that’s floating around in my head, I think I’ve mentioned it before, is that a service like this could require everyone to occasionally confirm that they’re still in the same place. Send out an email once a month, they can click a link or reply to remain on the map, or ignore it to be removed.
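The confirm-or-expire idea could be sketched roughly like this. This is a hypothetical toy implementation, not anything ZeeMaps supports: the token scheme, the two-month grace period, and the in-memory data layout are all assumptions for illustration. Each marker gets a secret token that goes into the monthly confirmation link; clicking it refreshes the marker’s timestamp, and markers not refreshed within the grace period are dropped.

```python
import secrets
import time

GRACE_PERIOD = 60 * 60 * 24 * 60  # two months, in seconds (assumed)

markers = {}  # token -> {"name": ..., "last_confirmed": ...}

def add_marker(name):
    """Register a person on the map and return their secret token."""
    token = secrets.token_urlsafe(16)  # would be embedded in the emailed link
    markers[token] = {"name": name, "last_confirmed": time.time()}
    return token

def confirm(token):
    """Called when someone clicks their confirmation link."""
    if token in markers:
        markers[token]["last_confirmed"] = time.time()

def expire_stale(now=None):
    """Drop markers that haven't been confirmed within the grace period."""
    now = now or time.time()
    stale = [t for t, m in markers.items()
             if now - m["last_confirmed"] > GRACE_PERIOD]
    for t in stale:
        del markers[t]
    return len(stale)
```

A real version would also need the email-sending side and persistent storage, but the core state machine is just these three operations.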
I would expect that people move so rarely that the information would remain mostly up to date for years or even decades. Am I wrong?
I can also add a comment somewhere telling people to notify me if they tried to reach someone and failed. In that case I would try to contact the person to see what’s up and, failing that, remove the marker myself.
I’ve added a marker but wonder whether this has long-term possibilities. It seems not many people have added markers yet (although this may relate to the readership of open threads).
Regarding physical movement, it seems to me that lots of LWers are relatively mobile, even measured on a scale of years. Examples: high-school students who leave for university or for work; university students who finish and get employment elsewhere; postgraduate students, contract researchers, and academics. It also seems like there are plenty of people willing to consider alternative locations based on future opportunities.
My concerns are not just related to physical movement but also virtual movement—think of the number of members who were active here even 2-3 years ago but are no longer involved. Or just how short-lasting many free email addresses are. Offering to actively manage the map is thoughtful but what if you decide to move on from LW? Particularly as you are a relatively recent joiner, how much confidence should we have that you will be around to manage the location map in future?
All that said, I might PM a member from looking at the map if I happen to be visiting their location … assuming their posting history suggests it might be worthwhile.
You guys worry too much :)
Personally, I’d be happy to meet someone who has read LW even if they aren’t around the community anymore, so I don’t get what the concern is. The map may have “LessWrong” in the title, but it’s really about meeting interesting people.
I’m actually not new—this is just a new account. I don’t see myself leaving LessWrong anytime soon. Regardless, if I disappear I don’t see why someone else can’t take over. I just changed the security settings to allow for anyone to export the data. Gram_Stone suggested circulating the map around survey time. I think that would be a good, predictable time to update people on the state of the map, including notifying them of a possible change of hands.
Even if people don’t move much, they might leave the community, or remain in the community but stop checking LW so that they look like they’ve left.
But it’s not just about whether or not it does stay accurate, it’s also about whether people perceive it to be accurate. If someone expects it to be out of date, they might not even bother trying. I (and I think a bunch of other people) find it somewhat aversive to contact a complete stranger with a request, even if that person has said it’s totally okay to contact them with those requests, and it gets worse if they said it six months ago and might have retracted the offer.
(Related, if I did contact someone and got no reply, I would then find it somewhat aversive to contact you and let you know about it. I might also forget to do that, it’s difficult to trigger off of not getting a reply to an email. And then I’d have to wonder if you’re still active...)
Incidentally, I looked for a previous iteration so we could see empirically whether or not it’s still accurate, but I can’t find it. LW uses the word “map” a lot. I did find a couchsurfing group that no longer seems to exist.
Regardless of how true what you’re saying is, it all sounds like very weak reasons not to try. Some people find it aversive to contact a stranger? Whatever, some don’t. I’m sure there’ll be plenty of people who won’t hesitate to seize an opportunity to meet a fellow LWer, especially if they live in less developed places where meeting interesting people is really hard.
A monthly mail confirmation sounds like more trouble than it’s worth, and people might be put off by the spammy nature of it. I think fixing the map as people report problems is a better approach.
Honestly, your complaints sound like those of someone who already can meet interesting people and has little need for the map in the first place. I’m sure those who really need it won’t have any problems with it. The map doesn’t have to be perfect to be useful.
I’m interested in seeing how previous attempts went, however. If you recall any others, don’t hesitate to tell me.
I agree, these aren’t good reasons not to try.
When you say more trouble than it’s worth, do you mean for you or for people on the map? I wasn’t saying “you should do this”, I was going for “it might be neat if someone wrote software to do this”. I agree that doing it manually would be a pain in the ass, and would be somewhat spammy because people didn’t opt in for that when they added themselves.
I mean in general. Clicking a confirmation link might not seem like much, but when you multiply by every person every month, it is. I suspect it would be more work than just fixing problems as they are found, but I might be underestimating the number of problems that will be found, I guess.
Most importantly, I don’t want to risk people not participating because they don’t want to be spammed. An alternative solution to consider is redoing the map every year, but I still feel it’s unnecessary.
I feel like this is the first one I’m aware of that is directly user-editable, which increases the chance of staying up to date, especially if each point is marked with when it was last updated.
EDIT: Oh, it’s not possible to remove or edit markers, only to add them, unless you save the magic URL from when the marker was created, or possibly are some sort of admin or maybe pay money. :-(
For my own reference, the magic URL to delete or edit my marker is: https://www.zeemaps.com/edit/WCXZQ_0pN-M2vwfx4VBR-g (I see no harm in making this public, since anybody can create an unauthenticated marker with my name on it in any case.)
Just contact me if you lose your URL. I’ll delete the marker and you can make a new one.
This is great. This is totally a thing I can use for this. Thanks!
Maybe this should be linked from the front page?
Maybe. Or maybe it could be mentioned on the meetups page. I’m not the one to make that call.
How do you get a high verbal IQ, boundary-testing, 10-year-old child not to swear? Saying “don’t swear” causes him to gleefully list words asking if they count as swear words. Telling him a word counts as profanity causes him to ask why that specific word is bad. Saying a word doesn’t count causes him to use it extra amounts if he perceives it is bad, and he will happily combine different “legal” words trying to come up with something offensive. All of this is made more difficult by the binding constraint that you absolutely must make sure he doesn’t say certain words at school, so in terms of marginal deterrence you need the highest punishment for him saying these words.
Is there a good reason why he shouldn’t swear, in private, at least? I can see how swearing at school could get him and/or you in a fair bit of trouble, which should be reason enough on its own to stay on the safe side of speech when he’s there, but if your True Rejection to his swearing in general is something to do with tradition or innocence or something like that, I don’t see how you can rightly punish him for saying what he wants to say.
Alex, if you’re reading this, please, seriously, be careful about what you say in public, especially at school. It’s not fair that your father is liable for what you do, but that’s the way the system works, and I think it’s important to put yourself in his shoes so to speak, and think how you would feel if you were responsible for someone who was knowingly taking risks at your expense in addition to his own. Having said that, I feel you have the right to say whatever you want to say, up to and including strong swears, as long as it doesn’t adversely affect other people. The catch though is that it’s hard to know what will adversely affect other people ahead of time, which is why I strongly advise you to err on the side of caution. Remember that I am 16, and thus still a minor in the eyes of the law and society, as you are.
Because it builds a habit? It seems to me that when people are used to swearing in one environment, it becomes easier to forget the context and swear in another. Sometimes the words come out of the mouth faster than the brain is ready with the analysis of the environment.
Tell Alex that swearing a lot weakens the value of your swearing as a signal. If you get a reputation for not swearing, then the one time you do, people will take you more seriously than if you used profanity on a daily basis. Also, swearing is a much cheaper signal than any of the alternatives you’d have to resort to if you had made your swearing meaningless.
This is the actual reason why I (a high-verbal-IQ, boundary-testing 16-year-old) end up swearing less than once a month.
Not sure if this will get the result you want, but it will approach what you want.
I just did this. He responded by saying that this means if I want him to be less offensive he should swear more, and then he said f---- you. My laughing in response probably didn’t help my efforts to get him to swear less.
Sorry, hope my suggestion wasn’t too counterproductive.
I don’t think the signal value of swearing is “how much I want to offend this person” but rather “how strong my opinion on this subject is”. Swearing at someone more will probably only make them more offended (if they get offended by swearing in that context at all). However, when the person who swears every day says that Policy X is f—ing scary, people will take them less seriously than the person who swears about once a year.
Don’t worry, it wasn’t.
This is surely true. On the other hand, there are other ways to convey how strong your opinion is, and if I feel once-per-year-strongly that Policy X is scary, maybe I should be conveying that by some more informative and costly means than just dropping in the word “fuck”. I tend to think that swearing works better as a mild intensifier, for relatively frequent use, than as a once-per-year thing.
I remember reading somewhere that swearing has a mild painkiller effect (e.g. stub your toe and go “fuck!”, less painful stubbed toe), but only if the person doing it rarely swears. I don’t remember where I read this, though.
Was the control group silence or yelling non-profanities? Because saying/yelling “ow” tends to be fairly effective.
This paper compared repeating a profanity to repeating an alternate arbitrary word, not “ow.” (first hit searching “swearing pain” on google scholar)
I don’t remember anything else about the thing I read.
It may only be personal, but in my experience it is the opposite, because that is like telling yourself “oh, how terrible this is,” which of course does not make you feel better, but worse.
If I recall correctly, it increases tolerance for pain which isn’t quite the same as “mild painkiller.” The experiment measured how long you could submerge your hand in freezing water, which participants who were allowed to swear could do for a longer period of time.
All you need to do is be sure that the 10 year old will understand that, and you’re done. Good luck.
(“I did this as a 16 year old” is not really very informative about 10 year olds. And even then you were probably atypical for one.)
I think this can be explained to kids, and I don’t think Alex is the average 10-year-old. “The Boy Who Cried Wolf” is basically a story about “don’t weaken your cheap signals by misusing them”, so a general case is clearly already explainable to kids. I’m pretty sure that Alex is smart enough to understand the concept if explained well, and that James Miller has the teaching skills to explain the concept. I’ve been using this as the reason why I don’t swear since before turning 16. I am absolutely atypical for a teenager, but Alex has been described as being more similar to the average LW reader than to the average member of the general population, so this may be applicable.
If the kid is smart enough to understand that, he can also reason as follows: I can tell that you really don’t want me to swear anyway, regardless of that. And if I’m trading off frequency for impact, it’s awfully hard for you to know the exact amount of swearing that optimizes impact. Both of these factors make it especially likely that what you’re telling me is motivated reasoning, so I should ignore you.
Again, a kid wouldn’t phrase it that way, but he might say something like “that isn’t really why you want me to stop!” and basically mean that.
He may even be right. Unless you actually want the kid to swear to a degree that maximizes impact, rather than less than that.
The accurate answer to “why is that particular word bad” is “because everyone else thinks it is.” This answer is also the same for every bad word. And it’s not an answer subject to nitpicking.
If the word doesn’t count, why is this a problem?
It’s not a zero/one thing and there is a cost to punishment and enforcement so there exist words I would rather him not say, but don’t want to punish him for saying.
Is the word ‘gay’ profanity?
It can be used badly, but it can be used in other ways too, so it isn’t a “bad word”.
Yes. The question was meant as a reply to your last question. The point is that it isn’t a bad word per se, but it is still annoying at best if you run around using it a lot in all contexts.
Is he bullying or insulting people? Does he lack the machinery to detect social disapproval? Either situation would require specialized advice.
No and No.
1) Identify the conditions which reinforce more swearing and stop doing those things. If saying “don’t swear” makes him more likely to swear, don’t say “don’t swear.” This sounds obvious in retrospect, but is very difficult to implement in practice because it’s frustrating to not have any idea what you should do instead.
2) Identify the conditions which reinforce less swearing and start doing those things more. It doesn’t sound like you’ve identified those conditions yet. This is the most important step.
To what extent does ignoring the problem work? There are probably certain areas where you should be ignoring it, and certain areas where you need to get more creative. The problem with using a strong punishment as your method, as you’ve identified, is that it can only be implemented after he gets home from school. Rewards for good behavior are generally preferable under such a condition, as rewards are generally better than punishment at maintaining behavior. The way his teachers handle punishing the situation will probably matter a lot more than anything you do.
I just read a book on behavior and that’s the kind of thing I would expect to read in that book: Attention is generally a reinforcer. Swearing can be reinforced by attention. When you stop paying attention to swearing, swearing stops (extinction). Of course that will only stop the child from swearing when talking to you, not when he’s in school.
Why don’t you tell him the reasons you don’t want him to swear? I assume you have reasons, but maybe you’ve never needed to articulate them before. I’m guessing your reason is something along the lines of “lower status people swear, I don’t want people to think of you as lower status”. I imagine a high IQ 10-year-old can understand that.
Also, if swear words are a fun and exciting thing to him, why not teach him all the swear words so he can increase his status among his friends?
Fear of this happening:
Vice Principal: “Your teacher says you said the word.....which is very hateful towards.....”
Child: “Yes, I learned it from my dad.”
Yes, and what can the Vice Principal do to you?
My son’s teacher creates special projects for him to work on to better challenge him. I very much don’t want that to stop.
OK, a valid reason.
I imagine that your son doesn’t want it to stop either. Perhaps a roleplaying exercise (or just a discussion) about the implications and possible results of swearing at school would be in order?
Although it sounds like he does not plan to swear at school, and believes he has enough control not to let a swearword slip out. In which case it might be best to leave ‘swearing at school’ on the table unless he actually does it, in which case a lesson on contrition might be in order.
Swearing at home is a harder problem, I fear, if there’s no clear and articulable reason why he ought not. Although I see you mention words that are “attacks on groups”, which makes it sound like he’s using slurs; if that’s the case, I would suggest that a broader, serious talk about the historical context of them is in order. I expect he is plenty capable of understanding it. (But in explaining that, you will probably want to concede that some words are much worse than others, even in the realm of “things we’d prefer you didn’t say at home.”)
He doesn’t ever say slurs with the intention of slurring a group. We have successfully stopped him from repeating certain bad words by using the strategy you mentioned of giving their historical context, and via the normal parental approach of picking our battles and being extremely firm about him not saying slurs once we have explained that a given swear word is an attack on a group. Still, the idea of anyone wanting to slur an entire group seems so silly to him that it’s hard for him to emotionally internalize the harm some people feel at hearing certain words.
So you have successfully stopped him from saying bad things where you were able to articulate a reason?
Can you articulate a reason for the swear words that are still a problem?
What would happen if he did say those words at school? Would they expel him? Does he know what the consequences of saying those words at school are, and does he think these consequences are insufficiently bad to act as an effective deterrent?
Maybe this is a bit tentative, and I’m certainly not a parent myself, but:
Sometimes, having an authority figure tell you what to do can be status-lowering, even when the authority figure in question is trying to give you useful advice, or force you to do something which benefits yourself. Would it be possible to preface future conversations where you need to steer Alex in a different direction with a few statements that build up his self esteem, and end the conversation by praising him for doing a good job on something or other?
I’m an adult, and this sort of thing works on me, too.
What is the reason you don’t want him to swear? Maybe you could tell him that.
A few thoughts:
It sounds like he’s being rebellious. Separate the rebelliousness from the question of profanity, and discuss them separately. You might say something like “Asking questions if you genuinely want to understand something better is great, but asking questions to try to frustrate or annoy me is not. I’m getting the sense that you’re doing the latter.”
If he persists, put your foot down—but be really clear that it’s for the intent to annoy you, rather than because he’s asking questions in an attempt to honestly understand something.
It may also help to ask him, openly and gently, why he’s being rebellious. Sometimes rebelliousness comes from a perception that the rules are arbitrary or unfair. If you understand what he’s feeling, you can be in a better position to address those underlying causes. For example—Larks’s suggestion that sharing the reasons you don’t want him to swear may help. And maybe it would also help him to explain how this is a subjective issue, highly dependent on things like tone and social context, and perfectly clear rules are unfortunately impossible.
Telling someone who tries to wage status conflicts with you that you want him to stop fighting for more status is pretty pointless.
I’m not sure “status conflict” is the only possibility here; for example, the terminal value might be something like autonomy, or feeling genuinely listened to.
He is occasionally openly rebellious on a small scale. I suspect that being rebellious towards parents is a terminal value for many 10-year-old boys.
Good idea.
I suspect it’s a more important terminal value for 11-year-old boys, and yet more important for 12-year-olds… :-/
We have told him that other people will think less of him if he swears, that some words are attacks on groups and hearing these words will cause emotional discomfort to members of these groups, and that his swearing causes his mom some discomfort. I have told him that while I am not inherently bothered by him swearing, I don’t want him to do it around me because it will make it more likely that he will swear at school, but he claims that his swearing at home doesn’t increase the likelihood of him swearing at school.
Rationalists make bets. Is he ready to make a bet on this claim? (Framing this as “bet” could be better than framing it as a “punishment”. Of course there needs to be some reward if he wins the bet, otherwise there is no incentive.)
The 10 year old is probably risk averse on this, so would have good reason not to take such a bet—pretty much anything that he would lose for losing the bet is a loss that he wouldn’t be able to absorb, and no sane parent is going to give him a punishment that is small enough that he can absorb it. Of course, being a 10 year old, he wouldn’t use the term “risk-averse” and may phrase it in an awkward way or be unable to put his objection into coherent words at all, but that’s basically what he’d be doing.
So the fact that he would likely refuse such a bet (unless he is overly optimistic, also a possibility) won’t prove that he is wrong. As a parent, you could always lie and say “if you won’t take the bet, that means you don’t really believe it” but you would be lying, so whether you should do that depends on what you think about using bad reasoning to make kids obey.
On the other hand, it would be perfectly rationalist to say “I’m a grownup. I know how people, and especially kids, act. If I let you swear at home you will say it somewhere else, so no, I’m not going to let you swear at home.” You’re not criticizing his reasoning, you’re asserting that you have better priors than he does, and you most likely do.
Tough! Many adults fail to understand that they’re not perfect rational agents, and thus that their habits really matter. I guess on the bright side this could be a good opportunity to teach him that he should not cultivate habits that raise the psychic cost of virtuous behaviour, even if those habits are not themselves inherently vices.
Are there some funny faux-swears (something that a cartoon pirate would use) that he could enjoy? Use those a lot at home.
That would have worked on me at five, but by ten I would have worked out what’s coming from my dad and what’s actually disapproved of by the broader culture, and if I was using the latter there’d be a reason for it. Not necessarily a very good reason, but a reason.
Not sure how good a model ten-year-old me is, though. I was a verbally precocious child.
I don’t have kids so take this with a grain of salt. Just give a disappointed/disapproval look every time he swears. Maybe practice in the mirror. Let guess culture work its magic.
You or your son might find this lecture on swearing helpful: http://blog.practicalethics.ox.ac.uk/2015/02/on-swearing-lecture-by-rebecca-roache/ And here’s the audio: http://media.philosophy.ox.ac.uk/uehiro/HT15_STX_Roache.mp3
With a small child, say a four year old, swearing is usually just testing boundaries and part of the natural process of learning language. But for a 10 year old it’s likely that he/she is being influenced by outside forces. Perhaps the child is trying to emulate other kids. I strongly suggest you try to find the influencing factors and attack the problem from that angle.
I understand the pragmatic considerations for inhibiting swearing, but he seems so smart that he should be allowed to swear. You should just tell the school he is too smart to control, but they can try themselves.
I wish I was 10 so I could befriend him.
People use PredictionBook to make predictions about many heterogeneous questions, in order to train calibration. Couldn’t we train calibration more efficiently by making a very large number of predictions about a fairly small, homogeneous group of questions?
For instance, at the moment people are producing a single probability for each of n questions about e.g. what will happen in HPMOR’s final arc. This has a high per-question cost (people must think up individual questions, formalize them, judge edge cases, etc.) and you only get one piece of data from each question (the probability assigned to the correct outcome).
Suppose instead we get some repeatable, homogeneous question-template with a numerical answer, e.g. “what time is it?”, “how many dots are in this randomly-generated picture?”, or “how long is the Wikipedia article named _?”. Then instead of producing only one probability for each question, you give your {1,5,10,20,...,90,95,99}-percentile estimates. Possible advantages of this approach:
Questions are mass-produced. We can write a program to ask the same question over and over, for different times / pictures / Wikipedia articles. Each question gives more data, since you’re producing several percentile estimates for each rather than just a single probability.
The task is simpler; more time is spent converting intuition into numbers. There’s less System 2 thinking to do, more just e.g. guessing how many dots are in a picture. You only need to mentally construct a single probability distribution over a 1-dimensional answer-space, then read off some numbers, rather than constructing a distribution over some high-dimensional answer-space (e.g. what will happen in the final HPMOR arc), deciding which outcomes count as “true” vs “false” for your question, then summing up all the mass counted as “true”.
Possible disadvantages of this approach:
Less entertaining—speculating about HPMOR’s final arc is more fun than speculating about the number of dots in a randomly-generated picture. IMO purchasing fun and skill separately is better than trying to purchase both at the same time.
If calibration training doesn’t generalize, then you’ll only get well-calibrated about numbers of dots, not about something actually important like HPMOR. I’m pretty sure that calibration generalizes, though.
Making predictions trains not only calibration but also discrimination, i.e. it decreases the entropy of your probability distribution. Discrimination doesn’t generalize much. Improved discrimination about numbers of dots is less useful than about other things.
Heterogeneous “messy” questions are probably more representative of the sorts of questions we actually care about, e.g. “when will AGI come?”, or “how useful would it be to know more math?”. So insofar as calibration and discrimination do not generalize, messy questions are better.
The estimates you mass-produce will tend to be correlated, so would provide less information about how well-calibrated you are than the same number of estimates produced more independently, I think.
Overall, I’d guess:
The homogeneous mass-prediction approach is better at training calibration.
You should use domain-specific training in the domain you want to predict things in, to develop discrimination.
It’s inefficient to make heterogeneous predictions in domains you don’t care very much about.
An alternative, roughly between the two groups discussed above, would be to find some repeatable way of generating questions that are at least slightly interesting. For instance, play online Mafia and privately make lots of predictions about which players have which roles, who will be lynched or murdered, etc. Or predict chess or poker. Or predict karma scores of LW/Reddit comments. Or use a spaced repetition system, but before showing the answer estimate the probability that you got the answer right. Any better ideas?
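As a concrete illustration of the mass-produced approach, here is a minimal sketch (the names and the uniform dot distribution are my own assumptions, not anything from PredictionBook): it generates dot-count questions, collects percentile estimates, and scores calibration by how often the true answer falls at or below each stated percentile.

```python
import random

# The percentile grid from the proposal: {1,5,10,20,...,90,95,99}
PERCENTILES = [1, 5, 10, 20, 50, 80, 90, 95, 99]

def simulate_question(rng):
    """One mass-produced question: the true dot count of a random picture."""
    return rng.randint(0, 200)

def toy_estimates():
    """A toy predictor whose belief matches the true uniform(0, 200) answer
    distribution, so it should come out almost perfectly calibrated."""
    return {p: 200 * p / 100 for p in PERCENTILES}

def calibration_report(n_questions, seed=0):
    """For each stated percentile p, measure how often the true answer fell
    at or below the estimate; a calibrated predictor hits roughly p%."""
    rng = random.Random(seed)
    hits = dict.fromkeys(PERCENTILES, 0)
    for _ in range(n_questions):
        truth = simulate_question(rng)
        estimates = toy_estimates()
        for p in PERCENTILES:
            if truth <= estimates[p]:
                hits[p] += 1
    return {p: hits[p] / n_questions for p in PERCENTILES}
```

An overconfident predictor (intervals too narrow) would show up as too many misses at the extreme percentiles, which is exactly the signal this training regime is meant to surface.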
The Credence game does this, for better or worse. Saying which of two metals has the higher boiling temperature, on the other hand, isn’t that fun.
The key issue is getting interesting question templates.
You can predict how long tasks/projects will take you (stopwatch and/or calendar time). Even if calibration doesn’t generalize, it’s potentially useful on its own there. And while you can’t quite mass-produce questions/predictions, it’s not such a hassle to rack up a lot if you do them in batches. Malcolm Ocean wrote about doing this with a spreadsheet, and I threw together an Android todo-with-predictions app for a similar self experiment.
I think homogeneous mass prediction gets you better calibrated at predicting the kind of questions in the sample group, but doesn’t help a lot otherwise. If you have ever played the calibration game that someone made a couple of years ago, it’s a lot like that. After using it for a while you get better and better at it, but I don’t think that made me better calibrated in general.
Quiz culture in India
In India, there’s been a move away from simple questions about general knowledge towards complex questions which take a combination of general knowledge and deduction.
Deductive questions would probably be harder for a computer program than general knowledge questions, and inventing good deductive questions would be a lot harder.
There is a very popular Russian TV game show, “What? Where? When?” (there is also a competitive version of the game played by a lot of people; many questions are available here, and you can use machine translation to get an idea of what they are like), whose questions require even more lateral thinking and intuition than those of Indian quizzes. A few episodes are available on YouTube. There was even a short-lived US version (which differs slightly from the original game) of “What? Where? When?” called “Million Dollar Mind Game”, but somehow it failed to gain popularity in the US.
It had a huge influence on quizzing culture in ex-communist countries. Most pub quizzes (even those completely unrelated to the competitive version of “What? Where? When?”) at least attempt to set up their questions so that it is possible to work out the answer using clues hidden within the question itself.
However, the pool of questions suitable for humans seems to be somewhat larger than the pool suitable for computer programs, because very often the need for deduction and lateral thinking relies on the question being based on very obscure facts that players are unlikely to know. In some (but not all) cases a computer program might be able to find the answer in a very large database instead of working it out from clues hidden within the question itself, which, I think, is the goal.
Posting to register my great admiration for “Что? Где? Когда?” It was a great show, and was a part of what inspired me as a kid to get into math/science. A lot of the contestants were very smart grad students, as I recall. I am very sad there isn’t such a show in the Western world, it’s all of the form of “* got talent” which is about picking entertainers, or “Jeopardy” which is about silly trivia. :(
KVN is also partly about wit/quick thinking, of course. My hometown’s KVN team (“the Odessa gentlemen”) was universally feared :).
I had the idea a long time ago to make a PredictionBook commandline tool, but I never got past designing the interface and outlining what the basic functionality should be. So I’m dropping it here in case anyone might find it interesting.
Interface to google.com: "argument site:predictionbook.com -site:predictionbook.com/users/"
Returns: select all hits, spit out the pages, newline delimited: ‘ID URL’
Comment and prediction fields must:
enforce length limits on prediction and comments
enforce 0-100 on probabilities (warn on 0 or 100)
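A sketch of the validation rules above, assuming illustrative length limits (PredictionBook’s actual limits may differ, and the function name is mine):

```python
import warnings

MAX_PREDICTION_LEN = 250  # assumed limit, for illustration only
MAX_COMMENT_LEN = 500     # assumed limit, for illustration only

def validate_entry(prediction, probability, comment=""):
    """Enforce the field rules before submitting anything: bounded lengths,
    probabilities in [0, 100], and a warning on the extremes."""
    if not 0 <= probability <= 100:
        raise ValueError("probability must be between 0 and 100")
    if probability in (0, 100):
        warnings.warn("probabilities of 0 or 100 are rarely justified")
    if len(prediction) > MAX_PREDICTION_LEN:
        raise ValueError(f"prediction exceeds {MAX_PREDICTION_LEN} characters")
    if len(comment) > MAX_COMMENT_LEN:
        raise ValueError(f"comment exceeds {MAX_COMMENT_LEN} characters")
    return True
```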
How much of a solved problem is ergonomics when using computers?
I did some googling and found a lot of conflicting information, but I seem to remember there being at least some discussion on ergonomics on LW before.
This is a potentially high-value topic, since many people spend a lot of time at their computers, and there is also likely some low-hanging fruit for people who are doing things drastically wrong.
I’m particularly interested in the best choices for computer mice, and which would be best for avoiding RSI and carpal tunnel and the like. I hear things about trackballs and vertical mice?
I had thought the only major issues were making sure your elbows were aligned with the keyboard and mouse, not hunching over, and keeping the computer monitor level with your eyes. The rest seemed minor. Is there more to it than that?
I’ve recently seen several unconventional business models proposed. For instance:
Robin Hanson proposed “egg futures”: a business could buy eggs from fertile women and freeze them, betting that in the future the donors would become infertile and buy their eggs back at higher prices.
There have been proposals that people could sell “interrupt rights”, so that advertisers could pay to advertise to them, at a rate set by each person according to their own valuation of their time.
The startup MonkeyParking used to allow users to buy and sell public parking spaces, taking advantage of the fact that parking spaces’ value is greater than their price.
Has anyone made a compendium of these sorts of unconventional business models?
Edit: I’m guessing the downvoter found something offensive. I’m not saying these are good business models (though I do think they’re all ethical), only that I’d like more output from the sorts of mental algorithms that generated them.
Yes, successful unconventional business models are called startups which made it, and unsuccessful unconventional business models are called startups which failed.
Maybe walking backwards in familiar surroundings can be used for self-calibration? Immediate feedback, freedom to optimize study conditions (and so make a prior estimate of how well you expect it to work)... maybe a forest/park is better than a house, if there are fewer turns to make and you can instead ask yourself ‘will there be an oak right now?’, having passed it on your way forwards...
BTW, there are different biases, and they might influence a person’s thinking by different degrees. How sure are we that calibration using just one method will generally help? (I am asking this here because I have seen statements that ‘calibration generalizes’, but not looked for them, so if you can just throw me a link, I’d thank you.)
For example, yes, estimating probabilities as a habit should make one more wary, and applying rigorous Bayesian reasoning—more accurate, but it would all amount to a figure, a value for posterior probability, and we have seen from the Sequences that people just ignore them. They are given the numbers, and they still just don’t apply them. Is it possible to calibrate with specific biases in mind, so that our pattern-matching abilities would actually help us later?
Like, the Affect Heuristic. Maybe to calibrate against it you should imagine yourself a newbie ‘on the other side of things’, the insurance company—public health lecturer—airport director (or whoever heads it), when you are presented with a choice that triggers an emotional response. Like, your choosing right now is not a heroic responsibility, but you still better get it right.
Congratulate yourself if you are right, fire otherwise. Make a joke about it, because you’re going to fire yourself a lot. ?
Yet another candidate for an alternative model for Quantum Mechanics: Announcement, Pre-print
Of course the prior for such a thing being true is incredibly low. Presumably if these superfluids only satisfy Maxwell’s equations to first order, it may be possible to construct experiments decisively ruling it out.
The luminiferous aether is back!
I’ve been thinking this for a while: here on LW, evolution is often portrayed as a weak algorithm that proceeds to explore the genetic landscape by fumbling around with random mutations. And this is certainly true for natural selection.
We have though also sexual selection: ‘suddenly’ you can choose which genome gets to reproduce thanks to a brain! In principle, brains are Turing complete, and this means that evolution could be much smarter. Of course, even that program is determined by your genetics, and sometimes things can go awry. Still, and more cogent for humans, there is the possibility of ‘smarting up’ evolution.
Random mutations aren’t natural selection. Natural selection is the process that prevents harmful mutations from spreading and that encourages beneficial mutations to spread.
Sexual selection is part of natural selection in the way the term “natural selection” is usually used by biologists.
No, they aren’t. Neural nets don’t work with 1s and 0s the way computers do.
Actually, because a human can simulate a Turing machine’s execution using pencil and paper, humans are Turing complete. (I realize the original statement was that brains are Turing complete, but since each human brain usually comes equipped with an attached human, it seems reasonable to discuss whether humans, rather than human brains, are Turing complete.)
I don’t think I know a human who would have a zero error rate doing 1,000,000,000 Turing operations.
Nor do I. However, this is irrelevant. In determining whether a system is Turing complete, physical limitations are usually ignored. From Wikipedia:
If we did not ignore physical limitations, no actual computing system would be Turing complete.
Making errors means not behaving as a Turing machine. It’s separate from limitations of memory.
Any physical Turing machine will make errors.
To the extent that it does, it’s not an ideal Turing machine.
Ideal Turing machines, being, y’know, ideal, do not exist in reality.
It’s a model. Models have their uses. It makes sense to model a computer as an ideal Turing machine. It doesn’t make much sense to model a human that way.
Nobody suggested modeling humans as Turing machines. The question was whether humans are Turing complete and you implied that they are not because they make errors. By the same standard, no physical device is Turing complete.
That’s largely irrelevant, since even the first computers didn’t work with 0s and 1s the way modern computers do.
If by “evolution being smarter” you mean things like “brains becoming intelligent, developing science, and doing genetic engineering”, then yes. But it’s the brains who are smart, not the evolution per se.
Evolution would still fumble around the genetic landscape, except that with brains the local landscape becomes much more complicated, something like a fractal mountain instead of the smooth hills on most of the plain. On a different terrain, the weak algorithm may produce more interesting results. That does not make the algorithm more intelligent.
No, I meant that with sexual selection you can have an algorithm exploring the genetic landscape; you’re not limited to random mutations.
People have had this idea before. It’s called “eugenics”.
It has a bad reputation from its implementation by the Nazis, who might have corrupted it a bit for their other political goals.
But I think even a pure implementation of eugenics is not as good as the other options we have for improving the lives of future humans.
Very little of eugenics’s bad reputation dates from the Nazis. Some of it dates from before and some from long after. In particular, the winners continued their pre-war programs for decades after the war. Eugenics became unfashionable around 1960, a bit late to blame on the Nazis. And I think the emphasis on Nazi associations is even later (maybe 1970 or 1980), after eugenics was clearly losing.
Not quite: eugenics is a set of techniques that certainly isn’t limited to sexual selection. Plus, only humans can practice eugenics, whereas anything with a brain and sex can practice sexual selection.
BTW, ferns have pheromones, so they might be able to choose their partners; at least, we don’t yet know whether they do. Would you call it sexual selection if it leads to preferential inbreeding/outbreeding/between-species breeding?
I’ve decided to turn supervillain.
As the “Mad Homeopath”, I’m going to gather water with really bad memories, dilute it a hundred times and pour it in my city water supply, demanding they give me absolute control or they will all suffer. HA HA HA! (I probably still have to practice my maniacal laughter).
I’m searching for ideas on how to impress the public with it, how to make it known to the media, how to publish it anonymously (yes, I appreciate the irony of demanding that the mayor cedes control to an anonymous villain).
I can see this going one of two ways:
either I get the best demonstration of city-wide compartmentalization, at least to the appreciative rationalists;
or I get charged with “inducing alarm” by pouring pure water into the city water supply.
Either way I think is going to be a blast :)
I can see a third way: nothing explicit happens, but your name gets put onto the NSA “crazies to watch” list :-/
Oh, and the second way is likely to get you charged with terrorism, with, I expect, rather unpleasant consequences.
IANAL, but terrorism isn’t a well-defined crime in most jurisdictions, this doesn’t match any of the various federal definitions in the US, and I can’t think of any lesser charges that you could make stick by adulterating water with purer water. But on the other hand, accessing a municipal water supply is likely to involve at least a couple of crimes along the way.
The realistic worst-case scenario for this is something like Portland, Oregon’s Mount Tabor reservoir urination saga: you haven’t actually done anything dangerous and everyone (except for a few homeopaths, and I think even they would be more likely to treat it as a stunt) knows it, but you’ve created a very expensive public relations stink for no good reason. Governments tend not to appreciate that.
“Terrorism” is certainly a well-defined crime in the US. See e.g. here, in particular:
(note the word appear to be intended)
The realistic worst-case scenario depends on how pissy the prosecutors are feeling and whether they need a sacrificial goat at this particular moment. See today’s Popehat for a sense of how wide the prosecutorial discretion is.
You’re skipping the first test (“violent acts or acts dangerous to human life”), which is why this doesn’t qualify. No violence, no danger: not terrorism by federal standards.
Think about this from the prosecutor’s perspective. You’re looking at an obvious publicity stunt. Normally this sort of thing would be prosecuted under state law, which is what I was trying to get at with “most jurisdictions”, but let’s say you’re working under the US Code for… reasons. Maybe our perp crossed state lines. Now, you don’t want to encourage this sort of thing, but if you bring terrorism charges it’ll show up in the media (which is what the perp wants), and you know they probably won’t stick: any lab will tell you that the bucket of water the guy threw in the reservoir is completely harmless, and any competent lawyer is going to have the test done. So, knowing that, do you go for the charge that’s flashy but near-guaranteed to embarrass you, or do you go for the easy, uncontroversial charges of unlawful trespass etc., limit it to back-page news at best, and have the idiot pay a hefty fine and do some community service work?
I think you’re missing the point that for a terrorism charge to stick, the threat does not have to be real. If I demand a change in policy towards Backwardistan threatening that I’ll blow up the bomb in my suitcase, I will (probably) be charged with terrorism even if my suitcase contains nothing but dirty underwear.
“I pissed in your water supply, oops” is very different from “I poured some liquid into your water supply and I demand the government do X”.
The threat doesn’t have to be real, but it has to be “violent or dangerous to human life”. The entire point of this exercise is that it isn’t, and wouldn’t be considered so by any reasonable observer. Nor is OP making empty threats of violence, as in your dirty-underwear example: in this scenario, he did exactly what he said he would.
“I demand the government do X or I’m going to fart in my Congressman’s direction at his next photo op” is not a terrorist threat, and this is about as dangerous as that would be.
Well, let’s put it this way. I think this discussion of what might or might not constitute a terrorism threat is fine on a ’net forum (hi, NSA guys!), but I really would prefer not to have to argue this issue with federal prosecutors while sitting in pre-trial confinement...
Let me remind you of the Boston case where there was no threat, no demand, no nothing, the perpetrator was a big corporation—and still the result was a $2m fine (effectively) and the firing of an executive.
Have I not made it clear enough that I think this is a stupid idea? It’s totally a stupid idea, and it will get you arrested if you try it. I just don’t think that it’s likely to crank up the terrorist paranoia machine. That machine might be idiotic and humorless and prone to overreaction, but it’s set up to pattern-match to certain things—bombs, hijackings, threats to airplanes or big dramatic monuments, guys with beards or turbans—that this doesn’t fit.
There’s a big difference between doing something that could be construed as a bomb threat, however tangentially, and aping the forms of a bad superhero movie precisely to make a point about the harmlessness of what you’re doing.
Yeah, well...
To go full trope, get a few hysterical moms screaming at the local bigwigs “The TV said there’s dihydrogen monoxide in our water supply!! OH NO!!! WILL SOMEBODY PLEASE THINK OF THE CHILDREN!!!!” and there might well be pressure to make an example of the miscreant. Do you feel lucky, punk? X-D
Punch up, not down.
I don’t think anybody believing in homeopathy would be concerned. Homeopathic theory wouldn’t predict that anything bad happens.
On the other hand it’s likely blackmail.
Homeopathy would actually predict good results. According to their rule that “like cures like”, this would be expected to help people who are currently suffering from bad memories.
Why don’t you only pretend to put it in the water supply, thus getting the same response with less effort?
Sending difficult-to-source messages in general?
Using TAILS from a public computer would probably work, as long as you are willing to send the messages online. Keep in mind that people are unlikely to take prank messages from anonymous email accounts very seriously. (I have only ever gotten form letters from elected officials, and I send well-researched serious messages, so if they ignore those, they’ll probably ignore anything from mad.homeopath@anonymity-friendly-email-service.com.)
There should be some way to order flowers or other objects that you can send with physical notes attached without creating an account linked to you, but it would probably be difficult to anonymously buy a prepaid credit card, I’m not involved in bitcoin-related stuff, and I don’t think there would be enough of a market for bitcoin-based flower delivery for anyone to get involved in it anyway.
This sounds like a generally bad plan, though. It seems likely to end with your local newspaper’s police report column making snarky comments about you.
I thought of a situation in which individuals seem to act irrationally, but I don’t know of any cognitive bias that would cause them to. Some individuals seem willing to fight in wars to “help out” in it despite having a small risk of being killed in it. E.g. some are willing to have a 1⁄100 chance of being killed if they have a 1⁄100,000 chance of causing their nation to win the war, meaning that if they decided not to join the war, their nation would be 1⁄100,000 more likely to lose. However, people seem much less willing to have a 1⁄1 chance of being killed and a 1/1000 chance of causing their nation to win the war. Assuming one’s utility is a weighted linear sum of whether they died or not (with 1 meaning they died and 0 meaning they lived) and whether their nation lost the war or not (with 1 meaning lose and 0 meaning win), I don’t know of any weights that would make it both worth fighting in the former scenario but not worth fighting in the latter. Are people just acting irrationally or is my model wrong? If they are acting irrationally, what bias is causing them to do so?
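The “no weights” claim can be checked directly. Writing the linear model as expected utility lost = a·P(die) + b·P(lose), fighting is worthwhile exactly when b times the chance of swinging the war exceeds a times the chance of dying, which reduces to b > 1000a in both scenarios, so the linear model indeed cannot separate them. A quick sketch (function names are mine):

```python
def fight_is_worth_it(p_die, p_swing, a, b):
    """Linear model from the comment: fighting costs a * p_die in expected
    utility and buys b * p_swing by reducing the chance of losing the war."""
    return b * p_swing > a * p_die

# In both scenarios the threshold works out to b > 1000 * a, so for every
# weight ratio the two decisions agree: no weights separate them.
agree = all(
    fight_is_worth_it(1 / 100, 1 / 100_000, 1.0, ratio)
    == fight_is_worth_it(1.0, 1 / 1000, 1.0, ratio)
    for ratio in [1, 10, 100, 999, 1000, 1001, 10_000, 1_000_000]
)
```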
I think your model is wrong. Humans don’t work that way.
They don’t work that way because they are going to war to increase status, by failing to register small risks, or for some other reason?
Linearity is a very strong assumption. A linear approximation is fine when you’re looking at the difference between a 1% chance to die and a 2% chance to die, but overall the phenomena may exhibit nonlinear behavior.
People fight in war because they want to win, but most are willing to lose rather than die. Choosing to fight means that you’d rather risk death or injury than submit to your foe. The very act of fighting back may already mean that you get offered better terms. It also signals to others that you won’t submit easily. There are benefits to fighting besides victory.
The cost of defeat is definitely a factor. Losing a war in sub-Saharan Africa won’t affect the average American’s day, but if Mecha-Hitler invaded at the head of an army of Nazi vampires, I’m sure people would be willing to tolerate a much higher risk of death when deciding whether to sign up.
They don’t work that way because humans’ utility is not a weighted linear sum, and humans don’t make decisions on the basis of a single calculated number, anyway.
They may effectively treat the 1⁄100 chance as zero. Or fail to realize in “near mode” what exactly do they risk.
Give them a 1⁄1 chance, and suddenly it is not zero, and the consequences are very clear.
I’m not so sure about that. IIRC, people tend to overweight risks with small probabilities, which would make them more reluctant to take the 1⁄100 risk of death. That said, people tend to be overconfident, which might make them think their chance of death is < 1⁄100, so perhaps these biases would to some extent cancel each other out.
In many wars, those who fight get a much higher reputation than those who were expected to fight but refused. This has often translated into a reproductive advantage for those who fought. It’s not obviously irrational to want that reproductive advantage or something associated with it.
Maybe it’s slightly-dishonest signaling—the soldiers prefer not to die far more strongly than they prefer that their country wins, but they claim otherwise because societies reward those who profess to share the society’s interests. Society also pays them for being soldiers, with money and social status. Joining the army is also a good costly signal of all-round fitness, resilience, political affiliation, etc.
Fellow effective altruists and other people who care about making things better, especially those of you who mostly care about minimising suffering: how do you stay motivated in the face of the possibility of infinities, or even just the vast numbers of morally relevant beings outside our reach?
I get that it’s pretty silly to get so distressed over issues there’s nothing I can do about, but I can’t help feeling discouraged when I think about the vast amount of suffering that probably exists—I mean, it doesn’t even have to be infinite to feel like a bottomless pit that we’re pretty much hopelessly trying to fill. I have a hard time feeling genuinely happy about good news, like progress in eradicating diseases or reducing world hunger, because intuitively it all feels like such an insignificant part of all the misery that’s going on elsewhere (and in other Everett branches of course, in case the MWI is correct).
I know this is a bit of a noob question and something everyone probably thinks about at some point, which is why I’m hoping to hear what kind of conclusions other people have reached.
I remind myself that I care about each individual that can be helped by my action. Even if there are huge numbers of individuals I can’t help, there are some I CAN help, and helping each one is worthwhile.
Pretty much this.
Focus on what you are doing, and who you are helping, not who you aren’t. This is a broader problem than just EA too—you could think of all the possible achievements or research or inventions or friendships you could make in your life, and thus any particular string of them is irrelevant. But if you don’t focus on that infinity of great things you could do, you’re able to realize this particular life is pretty great too. Think of it in terms of ‘if I wasn’t here, these particular people would be worse off’ (usually quite a long list, even for non-EAs—friends, family, colleagues you help out etc.) and your contribution seems a lot more important :-) After all, you’re only one person of that infinity, so if you help more than one person considerably you’ve actually made a big (relative to you) contribution.
It’s plausible that the black plague was spread by gerbils rather than rats.
It seems to me that I should lower my trust in something, but what? Any claim related to biology which hasn’t been checked against DNA/big data?
You should lower your credence in the specificity of hypotheses. When someone says that there is a theory about “rats,” is it really the genus Rattus or the order Rodentia?
I don’t know about the credibility of the Smithsonian, but most contrarian claims about the cause of the Black Death are complete nonsense and are invalidated by DNA evidence.
DNA establishes that the plague was caused by a particular bacterium that lives in fleas, mainly on rodents. The hypotheses that it was spread directly between humans or by human fleas are possible, but not popular. Almost everyone agrees that it was spread by fleas on rodents, but there is some disagreement about which rodents. It isn’t even clear that there is a single rodent vector. For example, perhaps the fast spread was by rats on ships, but the spread inland was by native rodents.
How certain were you, before, that it was spread by rats?
The thing to lower might be your general confidence in historical information other than what’s clearly supported by multiple sources who were in a position to know.
I was sure it was spread by rats—I didn’t see any reason to doubt what everyone seemed to agree on. Unfortunately, poking round online has reminded me of an article I read within the past few years that I haven’t been able to find again, which included the point that pigeon coops (in England?) weren’t designed to prevent rat depredation, so rats were either absent or not prevalent at the time of the plague there.
This article goes with the gerbil theory, but says that rats probably made a small contribution to spreading plague.
Here’s an argument that it was people, not rodents, which spread black plague in England, and suggests that we don’t know whether it was bubonic plague or some other disease.
(rant) I’m reading the ‘politics’ section of the Sequences now. The comments are so full of references to pop culture that I am tempted to either move on or start adding to the pool. Seriously, people, it’s annoying.
Venting is fine. I respect it. Isn’t this going to ring hollow, though? I think all those comments are from a few years ago.
Don’t know. It seems that in more recent comments, there are fewer allusions compared to quotes, even if they may be still context-loaded. It’s just, if people continue to read the Sequences, why aren’t there newer comments in which this tendency would be also seen?
Is there anything on why people prefer to consume, rather than make things themselves?
This thought crossed my mind when my mom bought cookies today. She always buys them, never makes them.
I’m compelled to say something like “laziness”, but I could say this only applies to my mom. But on a larger scale, what makes people consume, but never produce?
(no ad hominem please)
I’d have thought it pretty much has to be a combination of
laziness
impatience
humility (i.e., thinking others can make better than oneself)
habit
I don’t think laziness and impatience are necessarily vices in this sense. Sometimes you have very good reason to want something quickly. Sometimes you have very good reason to want something without having to spend a lot of effort on it.
Making a batch of cookies takes maybe an hour of work plus a couple of hours of delay (depending on exactly what sort of cookies). More, if you don’t have ingredients to hand. So if you’re in WANT COOKIES NOW mode, or if you don’t enjoy making cookies, or if you have more important things to do, it’s not hard to see how getting them from a shop might seem preferable.
Personally, I love making tasty things and at least 90% of the cookies and cake I eat I’ve made myself. On the other hand, I have no interest in making clothes or bookcases or houses or cars. I do enjoy making software but am orders of magnitude short of having enough time to make better word processors and web browsers and operating systems than the Usual Suspects. I don’t see that any of this is terribly surprising.
There’s another answer, which I think is answering a slightly different question from yours: Because it’s more efficient. Economies of scale, specialization, etc. -- This doesn’t exactly explain why people choose to consume rather than produce, but it explains why society “chooses” to centralize production as it does and why that’s a pretty good choice overall.
(Incidentally #1: yes, it was deliberate.)
(Incidentally #2: I am pleased to report that yes, the internet has already invented the term ad mominem, which means just what you think it does.)
Economics 101.
There are factories that are really efficient at producing cookies cheaply. Making them yourself takes more effort. For most people in Western society it’s more worthwhile to invest effort into their work to earn money to then buy products other people produced.
Buying cookies takes about a minute if you’re already at the grocery store. Making cookies takes an hour upwards depending on what recipe you’re following. Unless your mother is made of free time, paying a small premium in exchange for not having to do it herself makes economic sense—and even if she is made of free time, there may be other things she’d prefer to do with it.
Given that the question is put very generally, here are two general answers:
Lack of skill and/or resources and opportunities. I cannot smelt iron ore and forge my own forks.
Cost-benefit considerations: for a great many things it’s much cheaper (in terms of money, time, and effort) for me to buy them rather than make them.
I can’t see how to make this question any more specific, so with that…
A rather pointless thing to hang on to, in the grand scheme of things.
I did not mean you have to do EVERYTHING yourself, but there are things you can do yourself that do not take a huge amount of effort.
Also, fun theory.
Why aren’t you baking cookies?
Good question. How’s it relevant, though?
You posted as though there was something wrong with your mother for not baking cookies. I admit that when I first read it, I assumed that she was buying cookies for both you and herself, which may have been a mistake.
Nah, what I meant was a general observation: people often consume, but never make things themselves.
They use software but have no idea how it’s made; they eat food but have no idea about the recipe; they listen to music but don’t know how to play even one instrument.
Now, that’s all fine if they DON’T want to do it. But making things is kind of what makes us human. It gives me a sense of people being satisfied with the present rather than trying to self-improve, or something like that.
Many of those things aren’t even DIFFICULT, yet plenty of people won’t make even a small attempt. It’s as low-hanging as fruit gets.
I don’t know what you mean.
I suspect you already know the literal answer. People do make, on a very regular basis—it’s called a job. It’s more efficient for me to sell my code and buy music than for me to make my own music, for all the standard and obvious reasons given above (comparative advantage, division of labour, depth of market, etc). But you’ve been told this answer and keep asking, which makes me think you are asking something else.
Can you clarify what kind of answer you’re after?
What’s your LQ?
Why do I get the sense that you’re using LW solely as a marketing vehicle for your website?
I don’t know. That’s a good question. Is it because you think that I get paid to drive traffic to my blog? Or, do you think that I get more money when more people visit my blog? If so, how much money do you think that I get? And, how exactly do you think that I’m getting it?
There’s no reason to make up a new abbreviation for it. Especially one that is opaque to other people.
For me it’s very hard to judge individual belief changes by those categories.
Replacing belief in God with belief in evolution isn’t real progress. It sometimes leads to annoying people who think that evolution should dictate their moral choices. It’s quite okay for an 11-year-old to have a simple view of the world and “believe” in evolution, but developing a rational way of dealing with the subject takes more.
Real progress is more structural. You don’t listen to evolution instead of God for moral guidance, but you stop having a single external source of moral guidance.
Most of the major things I believe today that I didn’t two years ago are in a form where I couldn’t express the belief in the language that I used two years ago.
Zero, I think. Is that a good thing or a bad thing?
What stands out to me about “epic changes of mind” is that the very scale of the event casts doubt on both sides of the change. By definition, you think whatever you currently believe is true, and the more wrenching the change, the more it will feel as if you have escaped from dreadful error into the pure light of truth. But a belief is not evidence of its own correctness, and the intensity of the feelings casts doubt wherever they are directed.
There is an excellent example in the case of John C. Wright, the science fiction writer, who was a staunch materialist atheist, and employed all his reason in arguing that philosophy. Then he had a heart attack, surgery, and a religious vision, following which he employs his reason in arguing with equal verve for Catholicism. Rationalists will say that something broke in his brain and led him to a state of inescapable error and delusion. He and his co-religionists say that God performed a miracle in saving him from error and delusion, that he might be a soldier of reason for the Christian faith.
If reason can be so readily practised in diametrically opposite directions by the same person, should we accept a conclusion merely because of a formidable argument? Should we believe our beliefs? If not, what should we do about them anyway?
I don’t think something in the brain needs to break for someone to have a religious vision. The problem seems rather that the person completely overrates a single experience. It’s very worthwhile to rest your beliefs on many distinct strands of evidence.
I would observe that, in general, the circumstances which lead people to change their mind from believing in God to not believing are not similar to the circumstances that lead them to change from not believing to believing. So even if the size of the change is the same, the changes may not happen for similar reasons.
0 could be a bad thing if either one of the following is true...
It reflects the fact that you haven’t considered a lot of evidence.
You ignored/rationalized substantial evidence that contradicted your fundamental beliefs.
If somebody has had one or more true linvoids, then I can’t accuse them of always ignoring evidence that contradicts their fundamental beliefs.
Very generally speaking… I’m guessing that people with higher linvoids consider a lot of evidence and have a tendency to adjust their beliefs to fit the evidence and not the other way around.
That being said, if I had been raised as a pragmatarian atheist rather than a liberal Christian… then perhaps my LQ would be 0 as well. The available evidence would have confirmed, rather than contradicted, my beliefs.
Your example is funny in how it mirrors a comment that I recently posted on this blog entry… Davis on AI capability and motivation. I don’t know if your example counts as a linvoid. Neither of my linvoids were caused by any single event. They were the result of a lot of digging. They sure weren’t epiphanies. This doesn’t mean that a significant event can’t provide a preponderance of evidence… it just means that I’d be a bit hesitant to label any resulting shift in belief as a linvoid.
Anyways, this is my first real attempt at analysis. It just seems like a truism that the more evidence somebody considers… the more likely they are to encounter evidence that contradicts their fundamental beliefs. It also seems like a truism that no two people are going to be equally able to adjust their fundamental beliefs accordingly.
One thing I notice about myself is that I always endeavor to “consume” evidence that my “opponents” throw at me. This is how I became just as familiar with the best anarcho-capitalist arguments for eliminating government as I am with the best liberal arguments for not eliminating governments. If I had only consumed evidence that supported my belief in limited government libertarianism… then I never would have had my second linvoid. So it’s not just the quantity of evidence considered… it’s the quality of evidence that one considers. And by “quality” I mean evidence that challenges your belief.
I posted this survey in a few different forums and the results have been disappointing. You’re really the only person who actually responded in a way that I had hoped for! Most other people either didn’t grasp what I was after… or they did grasp it but preferred not to provide it. In retrospect… perhaps a large part of the problem stems from how I worded it. No surprise there!
I’d be really interested to see how people on this forum would respond to a much better worded survey. Unfortunately… I’m not so great with words and my karma is too low to post articles. If you’d be interested in the results as well… please feel free to improve on my attempt. I’d of course be happy to provide any feedback on any drafts.