Open thread, May 17-31 2013
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Hey, not sure where else to put this:
I just logged back on after a brief absence from the site (a few days) to find I seem to have been genuinely karmassassinated. As far as I can tell, every comment I ever made has been downvoted, which was apparently enough to put me from 1200+ karma to −80 (although oddly, the “last 30 days” thingy claims I only got −148 karma; maybe it’s maxed out?)
I understand it’s possible to check who downvoted comments? It was probably a generically-named sockpuppet, I guess, but still.
EDIT: great, I apparently need to wait 8 minutes because I already commented. Did not know low karma did that.
Wow! That’s the first serious karmassassination I have become aware of. People do pissweak karma assassinations every now and again but I haven’t seen a serious 1,300+ kill. I’m frankly surprised anyone bothered.
The last 30 days measure is only for votes on comments that were written in the last 30 days, not votes received in the last 30 days. The initial implementation included all votes received, but it had the effect that certain users could sometimes stay on the Top Contributors, 30 days list even if they didn’t log in.
Possible, sure. I’m not sure how convenient it is. It may require direct database access.
Now usually when I see people complain about this I think “Working as intended! Downvote!”. But this is a clear exception. Assassination is certainly not intended. I hope someone is able to sort this out for you.
The “last 30 days” score gives you a score for posts/comments that you submitted in the last 30 days, not the downvotes/upvotes you received in that duration. So if people downvoted comments of yours from further back, it wouldn’t be counted.
Um, can any administrator types chime in on this? Apparently it must have been an established user, so it would be nice to find out who it is so they can at least get mocked for taking the Karma system so seriously (very passé).
Also, y’know, all those features that prevent passing trolls from annoying people too much are also applying to me, so if it’s possible to reverse this that would be nice too.
EDIT: bump bump.
The moderators might not read the open thread, but I’m sure any of them would be happy for you to PM them about this.
Funnily enough, I just did. Thanks for the list, though!
What did they say?
I haven’t gotten any useful replies yet; the ones I’ve gotten back so far were people saying they’re not a “high enough” mod to access those privileges. Strictly spambot banhammers so far :(
I understand that there is a limit on downvoting—it doesn’t cost karma, but it’s only possible up to the limit of one’s karma, or something like that. If so, a sockpuppet wouldn’t have enough karma. It must have been an account with a real presence here, either bingeing on OCD or using an automated script.
Ah, excellent point. Now I’m even more curious about who the culprit might be.
… in other news, I apparently need “1164 more points” to downvote a comment.
Oooh, now you can calculate exactly how many downvotes you have given out.
I seem to recall that it is (or was last I heard) limited to 4 times current karma.
The ideal outcome here may be if a PTB banned the offender, pumped Muga’s karma back up then removed all traces of this conversation. This is somewhat too close to an instruction manual and primer for saboteurs.
I think it’s much higher than that, like 4 times one’s karma or even more (which I find rather ludicrously high). Something like that would mean that a mere 300 karma (extremely easy to achieve even while saying nothing actually worthwhile) would suffice to downvote another person 1200 times.
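If the cap really is a fixed multiple of current karma (the 4× figure in this thread is hearsay, not documented, so treat the multiplier as an assumption), the arithmetic the commenters are doing can be sketched as:

```python
import math

def downvote_budget(karma, downvotes_cast, multiplier=4):
    """Downvotes still allowed under an assumed cap of multiplier * karma."""
    return multiplier * karma - downvotes_cast

def karma_needed(karma, downvotes_cast, multiplier=4):
    """Smallest extra karma that would permit one more downvote."""
    deficit = downvotes_cast + 1 - multiplier * karma
    return max(0, math.ceil(deficit / multiplier))

# e.g. 300 karma with no downvotes spent leaves a budget of 1200,
# which matches the "downvote another person 1200 times" figure above.
```

Running it backwards also explains the “1164 more points” message: from the points still needed, one can roughly back out how many downvotes an account has already spent.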
It’s amusing that someone would care enough about LessWrong karma to view it as worth this sort of effort. As character assassination and revenge goes, that’s pretty weak sauce. I suspect they got more disutility from wasting their time downvoting than you did by actually losing the karma.
A possible solution would be to require solving a CAPTCHA before casting a vote if, in the past 24 hours, one has cast more than 100 votes or more votes than one’s last-30-days net karma (whichever is less).
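A minimal sketch of that proposed throttle (the function name, the 100-vote base limit as a parameter, and flooring negative karma at zero are my own assumptions about the details):

```python
def requires_captcha(votes_last_24h, net_karma_30d, base_limit=100):
    """Proposed throttle: demand a CAPTCHA once a voter exceeds the
    smaller of base_limit and their last-30-days net karma (floored
    at zero) within a 24-hour window."""
    threshold = min(base_limit, max(net_karma_30d, 0))
    return votes_last_24h > threshold
```

Under this rule an established user with healthy recent karma could cast up to 100 unthrottled votes a day, while a fresh or negative-karma account would hit the CAPTCHA almost immediately, which is the point.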
Wow. Do you have any idea what might have inspired this? Like were there any controversial posts you made before this happened?
Being karmassassinated wasn’t something I was expecting, certainly; I don’t recall being unusually controversial beforehand. In fact, I hadn’t logged in for the past few days. But I was, statistically, probably arguing with around three people at the time.
I wouldn’t want to hazard a guess as to the precise individual, though; it’s quite possible someone’s just been increasingly annoyed at me without saying anything and finally decided I had to be taught a lesson. (Good luck with that, culprit, if you’re reading this; anonymous downvotes are not the most efficient means of communication.)
In a recent conflict with someone (who seemed to be mad at me for no reason I could agree with), I’ve tried two strategies consecutively: reasonable discussion & mediation techniques, and rage fits (I basically faked being really mad and upset at them to see what would happen; I’m sorry, I know, I was being a manipulative bastard). My faith in humanity took a hit (even though it shouldn’t have) after seeing that this particular person was basically immune to logos but very readily responsive to pathos.
So just a little reminder that may or may not be redundant around here: don’t do this. Don’t give more of a chance to the person who screams and acts crazy than to the person who tries to work things out with you the calm, mature way. It’s exactly the wrong way to respond if you want to incentivize rational behavior on the part of the other party. The message is basically “I won’t listen to any attempt at reasonable discussion, but try going hysterical on me, that one has good odds of success”, thereby earning yourself more hissy fits in the future. And especially don’t do this as parents, to your kids.
(I don’t know why I’m saying this here, it may go without saying for a smart bunch of people like you. Perhaps I’m temporarily under the impression that it is not obvious to everybody how astonishingly stupid it is to be more convinced by pathos than by logos, just because it wasn’t obvious to my IQ<95 acquaintance.)
I think that’s the kind of a thing that most people know in principle, but which is very hard to actually stick to when you have a raging hissy person in front of you, so it’s good to be reminded of it every now and then.
Yes, indeed; that’s one big part of it, the tendency to give in to hysterical behavior because it’s unpleasant to be subject to it and for a moment you put aside whatever conflict you have with the person in order to get them to calm down; you don’t feel that pressure when you’re engaged calmly and rationally. That’s something that affects people of all levels of intelligence and rationality.
When expressing concerns about the target audience of this message, I was perhaps focusing on the other half of it, the one which is skewed by intelligence/rationality: actually reacting well to reasonable discussion. So, on one hand, you need to not respond to pathos, and on the other you need to actually respond to logos. I don’t imagine LWers struggling with that part of the process, because we like nice yummy reasoning and arguments. Less cerebral people, as I found, may simply not respond as expected to “what-if”s and “my-side-of-the-pond”s and “consider-the-possibility”s; they don’t speak the rationalist language, and as such don’t recognize these attempts at mediation as invitations to switch from “stick-to-your-own-guns” mode to “let’s-debate-this” mode. (Pardon the Buffy Speak.)
It has this Prisoners’ Dilemma aspect: It is better for me to avoid conflicts with angry people… but I would prefer other people to reward calm behavior and not reward anger.
It is worth explicitly noting that this heuristic applies when, and to the extent that, influence over future behavior of the other person or witnesses is the dominant consideration. In a once-off encounter with a street thug or a bank robber different considerations apply. It is not always your responsibility to ‘train’ people.
Being angry is a signal that you’re willing to back up your disagreement with consequences of some sort, whether it’s violence or a lost friendship. It’s also a signal, commensurate with the degree to which it is embarrassing, that this is highly important to you. Why, precisely, is it irrational to respond to this? Did evolution prime us to respond to it because it thought it would be funny? It is, indeed, not obvious to me (though perhaps I have low IQ) that it is astonishingly stupid to be more convinced (behaviorally) by pathos than logos; behavioral reinforcement is but one concern among many, and whose value fluctuates in accordance with how many interactions you expect to have with this person, whether they are physically larger than you, &c. And the persuasiveness of logos, obviously, can rather depend on the quality of the logos. Maybe your logos isn’t as good as you think it is? You apparently weren’t able to discern why they were upset with you in the first place, which certainly would have placed a damper on your ability to articulate convincing reasons why they should not.
Being angry signals lots of things—and if I desire less angry reaction, I need to figure out the function of the anger in this particular context.
Dahlen’s point seems to be that in the ordinary social context, anger tends to function as an attention-seeking behavior, not a conflict-resolution behavior. In other words, most anger is trying to yank someone’s chain. If that is the case, then responding to the anger with more anger is not consistent with having a goal of reducing the amount of anger directed at oneself.
Your assertion that anger reactions can have only one function seems likely to be false—and not a charitable or steelman reading of Dahlen’s post.
Look. The basic assumption here is that people would rather not be the targets of a hysterical person’s fits, that it’s unpleasant to them, and if there were anything they could do to discourage it, they would do it. Another assumption is that people remember which strategy worked on a certain person when they tried to achieve a certain goal in their interaction with them. Therefore, if someone wants to get you to stop being angry at them, and they throw you a hissy fit, and it works, and when they tried to reasonably talk things through with you it didn’t work, then they’ll remember that the best way to get you to stop being angry at them is to throw you a hissy fit. The next time they’ll want you to stop being angry at them, they’ll probably throw you another. You don’t want this. You’re uncomfortable when they do that. Therefore the rational thing to do is not to reinforce that sort of behavior, and reinforce instead the behavior that makes you feel comfortable.
I didn’t say anything about the rationality of responding to anger per se. I just said that reinforcing a behavior you don’t want to be subject to is irrational (and I thought any audience could agree with me on that) and that this particular case belongs to that class of irrational things to do.
Why, yes. I earnestly believe that evolution has a sense of humour which influences its “decisions” regarding what sorts of behavioral tendencies to implement in humans.
It’s disingenuous to suggest an answer to your question which you expect no reasonable person to give.
Possibly, but I am not very tempted to fault the quality of my logos for the failure of my attempt at mediation, since the obstacle it had to overcome was of the kind “I don’t want to listen to you. (I want to indulge in my anger.)”. The only response that the other person would accept of me was to shut up, admit to not quite qualifying as a human being because of my moral faults, feel horrible about it and leave the room. I am inclined to believe that moderately unpersuasive arguments don’t block the way towards eventual reconciliation quite as much as that kind of attitude.
I was very much able to discern what they were angry about. I just said I couldn’t agree to their reasons, i.e. that I don’t believe their anger was in the least bit justified. So, this ruled out the possibility of internalizing their accusations and feeling guilty.
Stop this. Seriously.
Holy shit is this the wrong way to try to reason someone out of being angry. It’s like, the exact opposite of how to talk a person down.
It’s called a rhetorical question.
It is called a rhetorical question by people who want to frame the matter in a certain way, and de-emphasize the disingenuous aspect of it: Oligopsony said something witty, props to him for that, we should appreciate good rhetoric (and you’re a humourless curmudgeon if you disagree). Really, what was your point—so what if it may belong to the category of rhetorical questions? That is no reason for me to judge it more favourably.
Also, that’s a very condescending way to make your point, it has this connotation of “Ah, but you lack the proper term for it; here, let me illuminate you with my objective definition.” Thanks, but no thanks.
It’s a connotatively fallacious rhetorical question. Your “arguments expressed indirectly should not be rejected conditional on (lack of) merit” heuristic is flawed.
As opposed to what? AFAICT, questions whose straight reading isn’t implausible aren’t rhetorical questions.
The intended meaning of “Did evolution prime us to respond to it because it thought it would be funny?”, IIUC, is ‘obviously, evolution didn’t prime us to respond to it because it thought it would be funny’ (which seems correct to me), with the implication that we respond to that behaviour for a different reason, in a context where Oligopsony was mentioning or alluding to a few plausible candidate reasons for that.
As opposed to a rhetorical question which conveys a point as valid as implied. Obviously. Neither the argument implied by the original question nor the one you have made here are good arguments. Phrasing them as rhetorical questions doesn’t make up for that.
I took the argument implied by the original question to be “Humans respond to pathos in such-and-such way; humans don’t respond to pathos in such-and-such way because evolution found it funny; therefore, humans respond to pathos in such-and-such way for some other reason. Possible such reasons include this, this and this.” Did you take it to be something else?
Stop what? I haven’t the faintest idea what my IQ is, and you proposed low IQ as a reason for incomprehension in this instance. Why throw out a perfectly reasonable hypothesis?
Right. Stop. Just stop. I can see right through what you’re doing now.
It wasn’t a “perfectly reasonable hypothesis”; it was meant to reflect badly on me; it was an oblique accusation that I broke the social norm of not calling people stupid, or of not arrogantly believing everybody who disagrees with me to be stupid. Of course I don’t believe that you, or anybody smart enough to be on LW, would ever give serious consideration to the hypothesis that they’re really, truly, honest-to-God dumb; no, you’re a bunch of reasonably smart guys who are aware that they’re smart. Of course I chose the other interpretation of your words, the one that is in line with your interests in this discussion, the one that doesn’t conflict with the fact that people tend to maintain a flattering image of themselves, especially when facing people they disagree with, the one that is consistent with the kind of attitude you maintained towards me during this discussion—the one that assumes bad faith on your part. So no, you can’t just go around now and say that, oh, no, it was totally sincere and innocent.
As for the big question of the story—do I believe one has to be a dumbass like this acquaintance of mine to disagree with me on this? Of course not—predictably. I wasn’t surprised that they (the acquaintance) didn’t see it because, take my word for it, they just weren’t blessed with great intelligence. If, on the other hand, I see someone on here disagreeing with me on this, I explain it to myself this way: perhaps they misunderstood, or perhaps they reacted badly to one part of my post and consistency compelled them to react badly to the rest, or maybe even (but this is unlikely) I am missing something. But the hypothesis that I just ran into a complete idiot doesn’t cross my mind. And I’m writing this just so that I don’t have to explain myself again.
That was tiresome. Going through the intricacies of interpersonal affairs always is. Please, do me a favour and next time we talk, do your part on cutting the micropolitics to a minimum; the amount of noise that a non-neutral reply generates is ridiculous.
Yeah, that captcha is a stumper.
Because that is the biggest barrier to new people joining LW.
The biggest barrier that has anything to do with cleverness? Sure.
The biggest barrier to joining LW all right, but not the biggest barrier to staying on LW long enough to get more than 1000 karma points.
Oh, I’m sure if I keep on my current kick I can dip below a kilokarma.
That would still not be good evidence that you have a low IQ, rather than just being a dick. Hanlon’s razor only goes so far.
“Rather” my butt; there’s an incredibly obvious rude reply I could have made, and would have, had I the minimal intelligence to realize it.
[Godfuckingdamnit, this supporting response is an experiment in social dynamics. Will LW ascribe any game-theoretical relevance to this here anecdotal data of two comrades sticking together in the face of negative karma? Or is it all part of a larger plot I’m weaving?]
[:comradefist:]
Don’t you? If you’re a human, it’s almost certainly somewhere between 10 and 190; if you made it through high school, it’s very likely over 70; if you have a university degree, it’s probably over 90; if you haven’t won a Nobel Prize or similar, it’s probably below 160; must… resist… the temptation of making examples using the words “black” or “Jewish”; and so on.
Maybe you can call in Gwern to measure my skull shape and really narrow it down.
The best way to deal with conflicts is to assume that the other person has a good reason for being mad at you, and to figure it out. Anger is costly signalling, so they’re usually angry about something. When they’re mad at you for something that you think is wrong, arguing the point is only going to raise the tension level. Instead of denying it or defending yourself (end result: other person gets madder), try to go down a level of abstraction and talk about why they think the things they are telling you.
The best case scenario is that they run out of steam and you figure out a course of action that doesn’t push their buttons. The worst case scenario is that they’re basically not worth associating with ever in a cheap way.
Thought I would repeat something I recently posted buried deep in a digressionary comment thread of an old post:
http://pss.sagepub.com/content/14/6/623.short
http://dl.dropboxusercontent.com/u/67168735/heritability%20of%20iq.pdf
“Socioeconomic status modifies heritability of IQ in young children”.
To make a long story short, this analysis of a large cohort of children assigned a ‘socioeconomic status indicator’ from 0 to 100 to each family they were following, based on a large number of factors. They found that the heritability of IQ was a VERY strong positive function of socioeconomic status. At the bottom, they think less than 5% of IQ variation is moderated by genetics. At the top of the scale, over 80%.
Obvious interpretation: low socioeconomic status masks genetic predisposition. Alternative restatement: high status environments allow previously cryptic variation to show itself. Low status populations are too genetically diverse for there to be a common factor that doesn’t vary between any of them.
This is, of course, exactly the kind of result that you would expect to get given the way that heritability is defined. When you make environment more uniform in terms of quality, you drive up the heritability, and vice versa.
The strength of the effect is still interesting, though.
Doesn’t the Turkheimer et al. result suggest that equalizing environments can drive heritability up or down, depending on how one does it? It’s as if the norms of reaction converge with decreasing SES, whereas the usual heritability analysis implicitly assumes parallel norms of reaction.
Is the total amount of variation the same in different populations? How does the magnitude of the variation contributed by genetics compare?
(That is to say, one percentage may be very different from another percentage.)
Presumably you’re asking about variation as a function of SES...? If so, one can eyeball an answer from the top row of figure 2. At the bottom of the SES scale, there’re a bit under 500 units (IQ points squared?) of full scale IQ variance (none from additive genetic effects, a bit under 300 units from common environment, and a bit under 200 units from nonshared environment). At the top of the SES scale, there’re about 300 units of variance (essentially all coming from additive genetic variance). Note that these numbers all come with wide error bars.
Yep, that’s what I was asking about. Thanks!
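To make the percentages concrete: in the standard ACE decomposition, heritability is just the additive-genetic share of total phenotypic variance. A quick sketch using the rough figures eyeballed from the paper above (all numbers approximate, with wide error bars):

```python
def heritability(var_additive, var_shared_env, var_nonshared_env):
    """Narrow-sense heritability: additive genetic variance as a
    fraction of total phenotypic variance (ACE model)."""
    total = var_additive + var_shared_env + var_nonshared_env
    return var_additive / total

# Bottom of the SES scale: ~0 additive, ~300 shared env, ~200 nonshared.
low_ses = heritability(0, 300, 200)    # ~0.0
# Top of the SES scale: ~300 units, essentially all additive genetic.
high_ses = heritability(300, 0, 0)     # ~1.0
```

Note that the total variance differs between the two groups (roughly 500 vs. 300 units), which is exactly why the absolute variance components matter and not just the percentages.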
I put a HBD chick into play.
A surprising find in the light urban fantasy novel (Blood Engines): one of the sorcerers in the story starts referring to Nick Bostrom. The book describes his philosophical pedigree at Oxford and gives a rather detailed explanation of the Simulation Hypothesis, Bostrom’s trilemma and various implications thereof. Decidedly not what I was expecting given the genre.
Would you recommend the novel in general?
Yes (as light entertainment). It is relaxing to read about a protagonist that is (mostly) sane rather than being constantly bogged down by completely absurdly impractical deontological morals. While not perfect she is one of the most rational protagonists I have read about. In fact, she is probably more rational (in particular instrumentally rational) than MoR!Harry, albeit less clever.
I wouldn’t recommend the series for those that use fiction as ‘morality porn’ where doing the (naive and dubious) ‘right thing’ somehow miraculously works despite all the evidence that it should fail. For that matter I also wouldn’t recommend it as actual porn, which much of the genre tends towards. The female protagonist has better things to do than acquire and manage a harem containing at least one moderate-to-high status ‘good’ mate and a series of progressively more aggressive, dominant, abusive and superhuman monsters that are somehow infatuated with her.
The only books in the genre (that I’ve read) where this happens are the Anita Blake books. Most of the others AFAICT tend to involve the female protagonist being chased by several attractive men, and ending up with her One True Love.
Others that spring to mind off the top of my head are True Blood (The Southern Vampire Mysteries), Kate Daniels, The Red-Haired Stepchild and Mercy Thompson.
If I knew nothing about a series of books other than that it is urban fantasy I would take bets at approximately even odds that it matches “kick-ass female protagonist in a love triangle (or larger polygon)”.
The Anita Blake books are the only ones that I’ve read where the character is actually sleeping with most of the monsters chasing her. Mercy Thompson and Sookie Stackhouse have men chasing them, but they only tend to date one at once: they don’t have actual harems.
Sometimes the relationships involve sex, sometimes they do not. Enough involve sleeping with the monsters that it would be hard to collect a large sample without encountering them.
As for Anita Blake, over the last few weeks I read the first six. So far she hasn’t managed to have sex with any of her harem. She’s too busy being a prudish Christian necromancer who thinks a lot about how her men/monsters make her wet. I understand there is a transition at some point in the direction of raw erotica. I’m not sure I’ll get that far.
Worse, there’s a transition on the direction of dreadful writing.
I’m kind of hoping there is a transition in the direction of Richard being dead. Nobody that naive in his position should live.
I stopped reading because I couldn’t take the pain anymore, so I don’t know.
Book 10 is where the characters start to have lots of sex I think.
Hi all,
Would this be the best place to introduce oneself?
I’m 24, male, and resident in Bristol, England. I’m currently studying (read: procrastinating) for a master’s degree in computer science, and my undergrad degree was in English lit and mathematics.
I’ve been lurking LW on and off for some twelve or eighteen months, but I held off from registering for a while because I feel like my interests overlap only somewhat with those of the Lesswrong community. For example, I’m not especially interested in AI friendliness or existential risk; on the other hand, I am interested in effective altruism, ways to work more effectively and procrastinate less, and general bias-awareness, all of which seem to be staple topics here.
Other things I care and think a lot about include music and books, gender and sexuality. (Incidentally, I thought the Lesswrong Women series that ran recently was in general very good.) Politically speaking, I’m more left-leaning than libertarian, which seems like it might be the default political stance on LW (but which, incidentally, barely seems to exist out here in the old world). In brief, I think I’m more of what the North Americans would call a ‘liberal arts’ type by temperament, but with kind of a rational/scientific bent, too. In any case, I hope to make a decent contribution here.
That would actually be the welcome thread, but this is a close second best.
This is not uncommon in the LW community. Artificial intelligence, personal rationality, and effective altruism are the three big topics here, but many people are interested in only one or two of them.
It’s your lucky day.
Yeah, I saw that! Really out-there coincidence, at least from my point of view.
You really did get lucky. In the Tampa Bay area, we’ve only had one meetup; that was years ago, when Michael Vassar was traveling through.
From the latest census
LW used to be Libertarian dominated, but as it’s grown it has gained more and more progressives.
That’s interesting. I wonder what made me think that. Perhaps it was from reading plenty of old threads, or perhaps libertarian types are a more vocal minority. It’s definitely a stronger part of the zeitgeist here than I’m used to, though.
Probably an element of both. Two other aspects I’ve noticed myself:
Microeconomics gets invoked relatively often in posts & discussions here, and in my experience the use of microeconomic arguments by non-economists is (slight) evidence of having (right-)libertarian politics.
The modal attitude towards involvement in mainstream politics here is sceptical, and I associate that attitude with less mainstream ideologies like libertarianism (and Marxism and so on).
I wish the next census to taboo “socialism”. In my experience people use this word to describe three rather different things.
a) An imaginary post-scarcity utopia where money is not necessary, work is voluntary, and all people are educated to love each other.
b) Sweden—either the real one, or its idealized imaginary version.
c) The political and economical system led by Communist parties in the 20th century.
And I hope the majority of those people meant something like (a) and (b), because honestly, I can’t imagine how (c) could be related to rationality or truth-seeking or altruism or ethics.
“Socialist” was tabooed on the census, as were the other political orientations. The text of the option was:
I picked “Socialist” on this basis. There was a separate option for Soviet-style communism, which 0.7% of respondents picked.
Yeah, that’s again that thing… people in different parts of the world using the same word to mean something different.
I grew up in a country whose official name was “Czechoslovakian socialist republic”. We were a satellite to a country called “Union of council socialist republics”. The political/economical regime we had was officially called “socialism”. -- In my country, almost everyone means this when they use the word “socialism”.
So talking with other people who use the word “socialism” to mean something else feels kind of surreal. It’s like… are they from another planet or what? Are they not aware that we had “socialist” countries for decades in this part of the world? Or are they in complete denial about what happened (all the murders, torture, fear, and censorship) in those countries? (Then perhaps it is my moral duty to educate them.) Oh no, they are just using the same word to mean something completely different. At least I hope that all of them do.
To illustrate my level of confusion, just imagine that you meet a group of young people from the other side of the planet, identifying themselves as “nazis”. When you ask them what exactly they mean by this word, they tell you it means a lifestyle based on “My Little Pony”. You are like: WTF?!! Then you ask them whether they know something about European history, and what their opinion is about the guys who called themselves nazis, like Hitler and his friends. They patiently explain to you that Hitler and his friends were definitely not true nazis, because, you know, they were completely unlike “My Little Pony”. Isn’t that obvious? Therefore it would be more proper to call Hitler an anti-nazi. (And later, you see that 27% of LW readers self-identify as nazis. How horrifying would that feel?)
I tend to think of “socialism” as an umbrella term that includes a number of different concrete political systems, all broadly committed to some form of state-directed egalitarianism (or at least more committed than traditional liberalism). Specific socialist doctrines range from anarchism to Soviet-style communism to Scandinavian-style social democracy. Historically, I think most of these systems regarded the socialist state as transitory (or non-existent, in the case of anarchism), paving the way for a class-less utopia where all means of production are held in common, but I think few self-described socialists (excepting perhaps hardcore communists) would see this transition as plausible any more. I certainly don’t.
So I wouldn’t say that I mean something different when I say “socialism” than you do, nor would I say that communism isn’t true socialism. I would say that we are both talking about socialist systems, but different types of socialist systems. There is something that communism and social democracy have in common, which makes them both forms of socialism, but I doubt that that common core includes what you find most reprehensible about communist regimes.
I agree that something like “social democrat” would be a less confusing label than “socialist” for the next census, in order to distinguish the particular variety of socialism that was intended.
Yeah, that makes sense. The utopia, the Scandinavian-style social democracy, the Soviet-style communisms all belong to a greater “socialism” superset, just like Friendly AI and the paperclip maximizer both belong to an “artificial intelligence” superset.
And that is also a reason why someone saying “we are ready to build an artificial intelligence tomorrow”, without providing any more details, would make some people here scared. Not because all AIs are wrong; not because we don’t want any kind of AI here; not because we know that their AI would be unfriendly. But simply because the fact that they didn’t specify the details is evidence that they didn’t think about the details, and thus they are likely to build an unfriendly AI without actually wanting to. Because the prior probability of an unfriendly AI is greater than the prior probability of a friendly AI, so if you just blindly hit a point within the “artificial intelligence” space, it is likely to go wrong.
In a similar way, I am concerned that people who want utopia-socialism don't pay much attention to the details (my evidence is that they don't find the details worth mentioning), and are probably not aware of (or disagree with) my opinion that it is much easier to create Soviet-style communism than a stable Friendly socialism. I mean, even if your starting group of revolutionaries all have good intentions, you will probably get infiltrated and removed from power by some power-hungry psychopaths, because… that is what homo sapiens usually does. You know, mindkilling, corrupted hardware, the conjunction fallacy (all the things that must succeed to build the utopia), and so on. -- And the different opinions may be caused by some people having first-hand experience of Soviet-style communism (especially the aspect that many well-meaning people created the system and supported its running, despite the horrible things that happened; partially because the system made it illegal to share information about those horrible things, while supporting the spreading of good news, whether real or imaginary), and other people not having this experience (but hearing some of the good news).
Yes, please. We might call these “post-scarcity socialism”, “welfare-state socialism” (or “welfare liberalism” to Americans), and “Communist Party socialism”.
(Of course, there is the argument that we could be living in a post-scarcity society today if it weren’t for the increasing fraction of wealth held by the ultra-rich.)
There is a very old Soviet joke about what would happen if socialism (of the Communist Party rule variety) were established in the Sahara. The answer is that after a couple of years a severe shortage of sand would develop...
I am unaware of a serious version of such an argument.
(For some value of “post-scarcity” smaller than an American might have in mind.)
I know some hardcore C’ers in real life who are absolutely convinced that centrally-planned Marxist/Leninist Communism is a great idea, and they’re sure we can get the kinks out if we just give it another shot.
C’ers?
People who’d choose option c) from Viliam Bur’s list, I imagine.
As in, line up all those against and shoot them?
Do these people see themselves as among the organisers of such a system, or the organised?
You also know some people who desperately need a course in computational complexity. Markets aren't perfect, of course, but good luck trying to centrally compute the distribution of resources.
The various hardness results for economic problems (e.g. computing Nash equilibria is PPAD-complete) cut just as much against free markets as against central planning. If a central agent can't solve a given problem within cosmological time scales, then neither can a few billion distributed agents.
The free market doesn’t solve the central planning problem. It reliably climbs local hills in the solution space by putting more decision-making ability (money) in the hands of those who make better decisions (make more money).
The free market, in a typical situation, has the advantage of more raw computing power, simply because every person uses their own brain to optimize for themselves. And (some people believe that) the benefits of this additional computing power outweigh the costs of not having it coordinated, and thus wasting part of it. (It also has the advantage of local information, of access to this information before it is filtered by political processing, etc.)
But technically, we are speaking about a linear increase in computing power here. Like, a few million people instead of a few dozen government experts. Computational complexity typically does not speak about linear factors. -- Thus you have sinned against the narrow meaning of "computational complexity". I believe the downvotes reflect this.
That doesn’t sound like a terribly good argument—the fact that it would take O(something big) time to compute something exactly isn’t terribly important if you can compute a decent approximation in O(something small) time.
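To make that concrete, here is a toy allocation sketch of my own (invented example, not anyone's actual model): finding the exactly optimal allocation by brute force takes exponential time, while a greedy heuristic runs in O(n log n) and often lands near, but not at, the optimum.

```python
from itertools import combinations

def exact_alloc(items, budget):
    """Optimal allocation by brute force: checks all O(2^n) subsets,
    infeasible for large n."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            cost = sum(c for c, _ in subset)
            value = sum(v for _, v in subset)
            if cost <= budget:
                best = max(best, value)
    return best

def greedy_alloc(items, budget):
    """Greedy by value-per-cost: fast, and usually a decent approximation."""
    total = 0
    for cost, value in sorted(items, key=lambda cv: cv[1] / cv[0], reverse=True):
        if cost <= budget:
            budget -= cost
            total += value
    return total

items = [(5, 10), (4, 7), (4, 7)]  # (cost, value) pairs, chosen for illustration
print(exact_alloc(items, 8))   # 14: the two cheap items together beat the dense one
print(greedy_alloc(items, 8))  # 10: greedy grabs the dense item and gets stuck
```

The gap between 14 and 10 is the point: the approximation is imperfect but cheap, whereas the exact answer stops being computable at all once n gets large.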
(I’m not a Communist, FWIW.)
My model of how to approximate the optimum solution—specifically, breaking it up into tiny pieces and keeping track of price-analogues by means of a real-number-valued function of the industrial output being managed—looks an awful lot like a free market with really weird labels for everything. It goes up to and including closing sub-units that detract from overall optimization (read: unprofitable firms).
I am new here, and I am not sure what to do.
I think most LWers would advise you to read the Sequences, but I reckon you could get 80% of the value from doing so by reading two of the following four books (which would be much less time consuming):
Harry Potter and the Methods of Rationality by Eliezer Yudkowsky
Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter
Good and Real: Demystifying Paradoxes from Physics to Ethics by Gary Drescher
Thinking, Fast and Slow by Daniel Kahneman
HPMOR is like the sequences but with far fewer explanations and way more narrative. If you want to save time then reading HPMOR instead of the sequences is the opposite of what you should do.
The books are good but there are multiple sequences which have maybe 5% overlap tops with any of those books. Not saying that they aren’t good recommendations—I am saying that your claim is invalid.
So yeah, read the sequences. If you need motivation to read the sequences read HPMOR.
I’m definitely not claiming that two of these books will cover 80% of the subject matter of the Sequences. My claim was about the value. Of course, value is subjective, so my recommendation relies on analogical reasoning (which is quite weak, even for induction), specifically that Epsilon725 is like me in relevant respects at the time I first encountered OB/LW.
Disagree. I read the first two before I read the sequences and I definitely learned an immense amount from reading the sequences that I did not already know.
Yes, but how valuable were those additional things you learned? What percentage would you estimate, if not 80%?
Easily the most valuable thing I’ve read in my life, I would say. (I am also eighteen and don’t read a lot.)
Not sure what to do here, or not sure what to do more generally?
Welcome! I hope you find this community as useful as I have.
—As others have suggested, reading the sequences is extremely useful and I wholeheartedly recommend it. However, it’s also really long. If you want to start with something less huge, there’s some good stuff here.
—Consider saying more about yourself here or in this thread.
—Where do you live? There might be an in-person meetup nearby.
Why are you here? Describe it in the Welcome thread to get feedback.
Note: “Why are you here, on LessWrong?” was not a rhetorical question :D
Read the Sequences.
How did you find the site?
New stuff is less helpful than you’d think it is. Start with the lesswrong Sequences (just google it, too lazy to make a hyperlink).
Here’s an interview Bill Gates gave on healthcare. It’s not directly rationality related, but it’s very good. I’d recommend reading all the way to the end. It’s especially good at the end.
http://www.washingtonpost.com/blogs/wonkblog/wp/2013/05/17/bill-gates-death-is-something-we-really-understand-extremely-well/
Very interesting interview—it had a few things that surprised me.
I’m planning on making a flashcard deck (for anki) with basic emergency scenarios (e.g. What should you do if you’re in a car that’s fallen into deep water? What should you do if you encounter a bear in the woods?). Does anybody know of some good sources about this kind of thing? I’m especially interested in data comparing frequency/mortality rates of different situations, so I can pick the most important topics to make cards about, and quantify how likely adding these cards to someone’s deck is to save their life.
Of course, I’ll share the deck if/when it’s completed.
The CDC publishes stats on causes of death. Motor vehicle accidents, accidental poisonings, and falls are the major accidental, non-disease causes of death.
To reduce the chance of a motor vehicle accident, do what the car insurance companies recommend. They have a big financial incentive to be right.
To avoid poisoning, don’t mess around with drugs. Overdoses of prescription painkillers and benzodiazepines (Valium) are common causes of poisoning; cocaine and heroin overdoses are also up there. It isn’t clear to me how many of these overdoses might be misreported as accidental to avoid talking about suicide, but don’t commit suicide either.
I have a couple of questions I’d like to ask you all. It’s research for something I’d like to do with my local meetup group, and I’d appreciate your help.
1) Can you name one or more activities or experiences you find to be depletary, in the sense of ego depletion, (i.e. something that fatigues you along some psychological axis the more you do it). If they exist, I would prefer examples that are personally salient to you.
2) Can you name one or more unreasonable demands you feel you are under, psychologically speaking? I’m thinking of cases which you might phrase “I’d like to (not) do X, but my brain just won’t let me”. As an example, if I get into a protracted internet argument, I feel like my brain compels me to rehash the argument over and over again. This feels like an unreasonable demand from my brain, and I would like to not be subject to it. Again, personally salient examples are especially welcome.
Thank you.
1) Official paperwork, especially if it’s the sort where a trivial error can be a major hassle, like tax forms. I have some fairly serious Ugh Fields around the latter.
2) I obsess over previous mistakes, even ones that were made months or years ago. I even obsess over hypothetical mistakes, where I could have screwed something up if I’d been a trifle less lucky or if my synapses had fired a little differently. The mental monologue in the first case runs something like goddamnit, I’m supposed to be better than that. In the second case it’s more of an abject horror at what Could Have Happened.
Upvote for meetups!
1) Job searching. It forces you to really size yourself up and compare yourself to everyone around you.
2) I really like your example. My own: feeling pressured to hang out with certain friends (because they’ll feel neglected). Rather than realize that friends who guilt you into seeing them need to be seen LESS, my brain just makes me feel bad.
1) I find that interacting with other people face-to-face is mentally exhausting for me. A few hours or so of prolonged exposure is not so bad, but more than that and I have to exert noticeable effort to not be snappish and crabby with people.
2) I suffer from an unreasonable need to sit with my back to a wall, or some other solid structure, even within my own home.
1) Interacting with a lot of randoms when I’m not in a great mood. Example: I used to play a ton of magic, less these days. Consistently I would have some fun but as the day went on and I had to make small talk with more and more strangers most of whom were fairly lame, I would get sick of it and just want to stop.
1b) long lasting one on one interactions with people I’m not extremely close to. Even with friends, spending more than an hour or so with someone one on one gets to be very stressful for me.
1c) Cleaning or similar tasks, if not spaced out a lot.
For 1), spending time in the company of people who are not extremely close friends. Doing this can often have fun aspects, but it also feels very much like continually exerting willpower (to keep interacting, being polite and interested/interesting, being socially normal enough, etc) and after a prolonged period I just want to hide under a blanket by myself for a few days. (I don’t actually do that.)
For 2), I’d like to not think up and then keep rehashing pointless worries that are based on nothing very likely to be true or that much of a problem. For example, what if I screwed up the lab machine configuration on my last experiment session? Well, I probably didn’t, and even if I did it’s not actually the end of the world and I can’t do anything about it over the weekend. Won’t stop me fretting about it though.
Right now I think I do not want to have children. I’m 24. But I’m worried that if I plan not to have children, I’ll change my mind later. How will I know when my preferences are stable enough to plan around? At what age should I expect to not change my mind about wanting children?
I distinctly remember telling a friend when I was 24 that I didn’t think I would have children and didn’t particularly want any. She laughed at me and told me I couldn’t fight biology. I sneeringly informed her that I was master of my own existence and wouldn’t be pushed around by evolution.
By age 27 I had a child, which, by that time, I wanted very much.
I would say the stability of your preferences depends very much on your reasons for not wanting children. I find that abstract reasons such as “the world is overpopulated and I don’t want to contribute to that” or “my professional life would suffer for a few years” are very easily shrugged off, when it comes down to it. By far the most important factor is how your spouse and/or romantic partner feels about the issue, assuming you have or want one of those. So if you’re really serious, be sure to choose a mate who also doesn’t want kids, and be sure that they are just as firm in their convictions as you are.
Thanks for input. Did you start wanting the child before or after it was conceived?
Is there any information available about the latest age at which people are likely to change from not wanting children to wanting them?
Why is this a problem? Freedom of choice is a good thing to have. If you really want to commit future-you to the choices that now-you wants to make, there is a variety of surgical procedures that are moderately hard to impossible to reverse. But again, why do you want to give up future choice?
Future is uncertain. Your preferences will never be cast in stone and that’s a good thing. Whatever you plans are, they should take into account the fact that things will (not may) change.
I’m worried about getting a woman pregnant, and I’m not sure reversible birth control is effective enough.
I don’t think LW is a good source for male birth-control methods :-) but Google is your friend. You can start by looking at RISUG.
Except sometimes directly.
Irreversible birth control + frozen sperm?
Use nicotine patches and porn to change your sexual preferences?
My suggestion: find people who have young children right now, or are about to. Volunteer to help them out if they ever need any assistance, and see if you can observe what the daily routine with children is actually like. (If you had younger siblings, you may already know a bit of this, but seeing it from the view of someone who’s actually responsible for them is going to be quite different.) If you find good reasons for why you’d absolutely hate the experience, then that’s some evidence for your preferences not changing (though having your own child is quite different from helping with another person’s child), and if you realize that it doesn’t seem that bad after all, then your preferences have already shifted and you’ll know it.
My wife and I have decided we’re going to homeschool our son, almost five, for various reasons. What age do you think it would be appropriate to start rationality training, and how would you go about it? Are there any particularly kid-friendly resources on rationality that anyone can recommend? (The sequences are good for beginners, but they’re well above the level of a five year old).
You might want to look into the idea of unschooling.
Certainly before they reach five.
(1) Kids want parents to do stuff. Most parents rely on their authority and don’t give the kid what it wants, even when the kid is able to provide valid reasons. A good way to teach rationality is to avoid relying on arguments from authority.
My father had the policy of always giving me the real reason when I asked for something whether or not I would be able to fully understand the answer. That taught me that it’s okay to get answers to questions that I don’t understand. It was a valuable lesson for me.
Dumbing things down and relying on authority for arguments are the two biggest things that parents do to avoid their children being rational.
(2) When it comes to giving out pocket money, consider giving it out as betting money. Let’s say the allowance is $3 per week. Whenever the kid disagrees with you about a factual matter, he’s allowed to ask you for your odds.
So the kid thinks it’s raining. You don’t think so and are pretty certain. So you say you have 4:1 odds. The kid can bet $1 from his betting money. If he wins the bet then he gets $4 in real coins, which he can use to buy something.
The pocket money is motivating so he will have a huge incentive to get good at having accurate confidence in his beliefs. After a while he might even give you some rationality lessons.
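For what it's worth, the arithmetic of such a scheme can be sketched like this (a hypothetical illustration; the function names are mine):

```python
def winnings(stake, odds):
    """At odds of odds:1, a winning bet of $stake pays $stake * odds."""
    return stake * odds

def kid_expected_gain(stake, odds, p_kid_right):
    """Kid's expected profit: win stake*odds with probability p_kid_right,
    lose the stake otherwise."""
    return p_kid_right * stake * odds - (1 - p_kid_right) * stake

# If the parent's 4:1 odds are well calibrated (kid right only 20% of the time),
# the bet is break-even in expectation; overconfident odds hand the kid free money.
print(winnings(1, 4))                # 4
print(kid_expected_gain(1, 4, 0.2))  # 0.0 -- break-even against calibrated odds
print(kid_expected_gain(1, 4, 0.5))  # 1.5 -- kid profits from parental overconfidence
```

The incentive structure rewards the kid for noticing when the parent's stated confidence is miscalibrated, which is exactly the skill being trained.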
(3) I would think about teaching a five year old Anki for all occasions where he has to learn something.
What is the best existing evidence for unschooling?
To me it seems based on the premise that children, when left alone, become automatically strategic. Which is not a new idea; J.J.Rousseau already made this popular centuries ago (and provided some fictional evidence).
Here is an alternative hypothesis: Children outside of (elementary, high) school do on average significantly worse than their peers in schools, ceteris paribus. But there are also other factors beyond school contributing to education, which means that an intelligent child of educated parents who invest a lot of time and expose the child to their values can get better results by unschooling than an average child of average parents with average attention from parents gets by school.
Apples to oranges. The relevant questions are:
1) Does an intelligent child of educated parents who invest a lot of time and expose the child to their values get better results by unschooling than an intelligent child of educated parents who invest a lot of time and expose the child to their values gets by school?
2) Does an average child of average parents with average attention from their parents get better results by unschooling than an average child of average parents with average attention from parents gets by school?
The answers to these two questions are not necessarily the same.
I completely agree that those two are the relevant questions. So is there a good evidence for either of them?
All I found was anecdotal evidence that intelligent children of educated parents seem to fare better with unschooling than average children at school. And we would both agree that such evidence is irrelevant. (At best, it is evidence that unschooling is not completely destructive. But the claims in favor of unschooling seem to be stronger than that.)
Now would be good? You can probably construct appropriate questions for a five-year-old based on the Eliezer’s version of the fundamental question of rationality: “Why do you believe what you believe?”. This can apply to a playground fight as much as to a big political issue.
I have attempted this with my daughter. “Is Father Christmas real?” “Yes!” “How do you know?” “Because [best friend] saw him!” “How do you know [best friend] is right?” “’Cos she is!” At this point I had exhausted a 5yo’s philosophical introspection.
In your position I’d be curious about her response to “Has [best friend] ever said things that turned out not to be true?”, but I’d also be worried about poisoning her relationship to her best friend in the process of asking.
[best friend] is also magical and has power over the weather, or at least making it sunny on a rainy day. Daughter’s mother and I have both attempted to gently stimulate skepticism on this point. (Said best friend has a somewhat troubled family life and I suspect is claiming to be magical to feel power over her life, so we’re happy to be gentle over this one.)
First, I got a more instrumental response from a 7yo on whether the tooth fairy is real: “As long as I find my dollar under the pillow, she is!”
Second, you were using adult language with a small child. Asking instead what her friend saw, in more detail, and discussing that could have been more illuminating. Or not.
Yeah, that’s a good idea. I was stuck in the idea of a set curriculum, but weaving it in wherever possible will probably help it stick better.
Pray tell. Or just tell, no praying required, that would be telling. Just prying. Required, I mean.
About 3. How old is your son again? What are you, a bad father? No worries, it may not yet be too late, if you wake him up and start now. Just ingrain the rationality training as an aspect of the way you interact with him; I go for the Socratic Method. Don’t set apart “rationality training” time (or are you planning to be irrational unless rationality is scheduled?!). Helping your kid develop mental models of others is my favorite. “Why is that person doing that, what does he want to achieve with that? What else could he do? Is what he’s doing the best way of reaching his goals? Yea, well done, now give the ice cream man his money, he’s looking at us weirdly.”
(Also keep in mind the “kids do as you do, not as you say” paradigm when interacting with others in the presence of your kid. Could lead to some strange conversations with the janitor, but aren’t they always. Strange, I mean.)
Edit: Tone for comedic purposes, here’s a special message to the downvoter(Edit:)s.
Downvoted entirely due to the edit.
It really boils down to the convergence of a few factors; he’s already learning at a higher grade level than he’d be placed in by his age, he suffers from some hyperactivity issues, and, quite frankly, my wife and I think we can do a better job than the public system. Or at least my wife can; I’m not convinced of my abilities as a teacher yet.
Obviously I’m not planning to be irrational at any given moment, but I was originally stuck in the mindset of curriculum since that’s what we’ve been going with for math, reading, and science. This is probably a better idea, though.
Has anyone tried e-cigarettes as a method to quit smoking or at least ameliorate the effects of smoking?
I smoke about a pack or two a week (3 a day minimum, sometimes binging once a week) and would like to reduce that in order to increase my chances of living longer. Anyone have experience they can share?
After buying an e-cig, I never bought another pack of cigarettes. It has been roughly six weeks, I think. My consumption was slightly higher than yours.
Congratulations, by the way. You have successfully added years to your life, ceased to constantly stink, saved a lot of money, and retained the mental edge and social benefits of smoking nicotine. Instrumental rationality at its finest.
Thank you.
I bought my roommate an ecig and he hasn’t gone back after ~6 weeks.
Feel free to profit off the hours of research I did, this is what I bought: http://www.chinabuye.com/ego-t-travel-pack-metal-box-450mah-quit-smoking-usb-rechargeable-electronic-cigarette-e-cig-black-1
extra batt: http://www.chinabuye.com/450mah-capacity-ego-ce4s-battery-for-electronic-cigarette-black
The US distributors just slap their badge on these and double the price. You’ll need extra clearomizers (or one of the other cartridge types) plus e-juice, but that is up to personal preference.
I know a group of four people who all used to smoke, then opened up an e-cigarette shop and quit smoking. They are all addicted to e-cigs now, but that’s a big improvement over where they were.
Have you tried making yourself throw up immediately after (or even better, while) smoking?
No, and if my experiences with drinking alcohol are any indication, it wouldn’t have an effect on cessation.
I’ve used them and liked them, but I never used normal cigarettes so I don’t know how the experience compares.
If this evidence is of interest to you, I still have not bought any more packs since converting to electronic cigarettes. If you have not yet converted, I would highly recommend doing so. If you are interested, I will message you my apparatus and places to purchase it at.
Yes: http://scholar.google.com/scholar?q=%28%22e-cigarette%22+OR+%22electronic+cigarette%22+OR+%22electronic+nicotine%22+OR+%22personal+vaporizer%22%29+quitting+smoking
Rather than an e-cig, I currently occasionally use a portable vaporiser, into which I place hand-rolling tobacco.
It raises the raw tobacco to around 250 °C, so the nicotine is carried in gas and can be inhaled. Nothing is combusted. It’s slightly larger than a AA battery and it looks like this.
This gives more of a nicotine hit and much less lung cancer risk than smoking, and uses cheaper consumables than an e-cig (you can buy hand-rolling tobacco everywhere, and it costs less than constantly depleting possibly-expensive e-cig cartridges).
It is also likely that vaporising this way gives more of an MAOI hit than an e-cig, contributing to both the high and the addictive properties. Wiki link
Subjectively, I enjoy this more than an e-cig. I have actively promoted the use of this device to friends to try to get them off cigarettes.
I’m kinda starting to panic. (Warning: Wall-o-text follows.)
I don’t like giving out my age, but I was born in mid March 1988. That makes all of these much scarier:
I live with my parents
My physical activity consists almost entirely of pacing the house when everyone is asleep/not at home.
I have ~$180 in PayPal, $700.08 in the bank, and ~$130-150 in cash, and receive ~$630 in disability benefits monthly. My student loan payments are ~$860. All the loan stuff is handled through my parents, who, in spite of my waiting until my dad was on the phone to tell the teller my desired PIN, seem to alternate between using money from my account or their own to pay for things without warning.
Meanwhile, someone I went to kindergarten with taught my cousin’s seventh grade English class, and my best friend in elementary school(1) is the director of finance at the local university.
I avoid interacting with anyone else who lives in this house as much as possible. Now that local schools are on summer break, that means much more hiding in one room or another. Which means considerably less physical activity, and only eating things that can be quickly grabbed and taken elsewhere (rather than anything that requires preparation or utensils). I am worried about my health enough without this.
I own property and a livable building nearby. I’d much rather live there than move to a city with actual sidewalks and public transportation and people I might be more likely to stand interacting with. This is most likely completely irrational (though I keep reminding myself that this means no rent and low utility costs as though that makes it any better an idea).
I don’t think my parents trust that I can actually do anything. I recently decided to go cross a non-busy street to where one parent was waiting in the parked vehicle to ask about lunch plans, while the other screamed that I would get run over. Notice, I’d been taught how to cross streets unaided annoyingly often since fourth grade by various mobility instructors. My interest in App Academy has been met with constant concerns over where I’ll stay/how I’ll eat/etc (The San Francisco facility, according to the interviewer I talked to, is livable, which was sufficient that I didn’t feel the need to ask any further questions at the time.). I’m basically expected to tell them if ever I need anything.
I am socially broken. I blame the constant “Don’t care what anyone thinks!” / “Say no to peer pressure!” / “Do what’s right and stand up for yourself!” / “College education! Seriously, it will magically dump the solutions to everything into your lap!” memes. Combine those with my poor vision, and I’m pretty sure the closest thing I had to a peer group is “people I annoyed the least”. Recently, my father declared in a moment of frustration at all the social activity my cousin was trying to have that he selected our current location because it was far away from everyone before all the recent construction. They basically create an atmosphere of “Getting involved with other people will get you in all kinds of trouble!” … then they turned around and started berating me for not being normal starting after seventh grade when the school started complaining. … Then sent me to summer camp, which pattern-matched to “You’re not submitting to peer pressure. Prepare to be reeducated.” so closely that… eh, this is probably where I stopped talking to them. This is also the age where my classmates started behaving less and less like people I wanted to associate with (because sex drugs profanity and insulting people for status were things I’d been told since birth to avoid. Except the sex part. I wasn’t told anything about that, so just kinda put it in the same category, since that’s what most people seemed to do.). So even though I failed socially for the first half of my life, the second half has been much worse. 
Actually, that’s definitely when I stopped talking to them, since that’s also when my already poor vision took a very steep decline, which I never told them about (Other things I never told them about include that time I found a giant lump in the inside of my thigh (I think it turned out to be some kind of acne that trapped a lot of water, but it was pretty terrifying at the time), that time one of my toes was having such problems that I couldn’t do much walking for a while (they found out only because I was at college at the time and my braillist found out and told them), or that time I had horrible stomach problems (I only informed them when it got painful enough that I could only crawl into their room at 3AM in the worst pain I’ve ever felt). Keep in mind, they’re my only means of accessing a doctor.).
I’m happiest when I’m successfully working on something—or at least thinking about it productively. Occasionally awesome things (good music/fiction) help. Actually working on anything is an uphill battle, one not aided by the afore-mentioned summer break. Since I don’t prepare food in any involved way when anyone’s around (and have no cooking skills better than microwaving or making sandwiches), my food selection is limited. I once got my parents to buy an air filter for my dorm room at college, but they never paid attention when I reminded them that the filters need changing every 60 days.
The end result is me spending a lot of time miserable and probably unhealthy and only occasionally having bursts of enthusiasm strong enough to get around it (usually when people are away).
I’ve applied to App Academy, and get the impression that my chances of getting accepted are quite high. Since their locations are in San Francisco and New York, if I do attend, I might be able to benefit from annoying the rationalist communities there in person. I’m trying not to plan for this as an eventuality, though; the costs of travel and the $3000 downpayment have to come from somewhere (Remember that I only have ~$1000 that I can use, and I’ll lose SSI if ever I have more than $2000), and my independence skills leave everything to be desired. I’ve looked into local opportunities—there aren’t many jobs with online listings in my area that I could actually do, let alone ones I’d have any interest in (never mind the enhanced difficulty that I keep hearing about blind people having at getting employed, ADA or no ADA). The local bus system is… well, it exists, which is pretty much all I can say about it (it certainly doesn’t stop anywhere within walking distance of my current location, not that there are any sidewalks within walking distance). Cabs are expensive enough that I wouldn’t dare try taking more than one or two without a serious income boost. And this all still runs into the big problem: If I try to do anything on my own and my parents find out, they will say something. I doubt it’d be anything negative, but I have such a strong desire to avoid that sort of conversation that it puts a huge cap on what I’m willing to do with them within 100 miles. To the extent that in the extremely unlikely event that I somehow wind up with a girlfriend, they’re the last people in the universe I’d want to know. They’re not awful people, or anything; they buy the food and pay the bills, after all; but beyond that, I’d rather go get lost in San Francisco than talk to them about anything important.
The end conclusion that all this leads me to is that I have no reason to expect I’ll live all that long, let alone while avoiding depression. And that just sucks.
I’m kinda feeling like I’m close to exhausting what I can do and just need a genie to come save me. But that seems overly pessimistic. I can’t seem to come up with a plot more thorough than “Make money somehow”, which has the nasty problem of requiring that I can manage people and/or consistently work on something. I kinda feel like imagining what Harry James Potter-Evans-Verres would do in this situation is a better strategy than what I’ve been doing, but I can’t actually seem to do that (and half suspect it’d involve writing something awesome online, using the charisma to fund a startup, and using said startup to fund his escape).
I’m also tempted to repost this in tomorrow’s open thread, but it is a >8kb whine-fest, making me doubt that the utility of doing so outweighs the disutility of annoying everyone more than I have already.
(1) “Best friend” = person I spent the most time with at recess, maybe. The first and only out-of-school interaction we had was when I was 18 (and by complete coincidence it resulted in his new car getting totaled).
Oh, hey, we’re almost exactly the same age.
You’ve got a lot going for you. You can program, you can write, you can enjoy working, you have at least some college education. This is enough to build on.
Based only on this post, it looks like your biggest problem is your social paralysis. Solving this problem isn’t easy, but it’s possible. Comfort zone expansion (CoZE) seems like the recommended model for training these skills. Try doing things that are possible but make you feel awkward—say, spend five minutes at a social event and then leave, or eat a quick meal in the kitchen, or something. Don’t worry about doing these things confidently or well. It’s supposed to be difficult and terrifying; when you do something terrifying and the world doesn’t end, your brain will be less terrified to do it in the future. This should hopefully expand your comfort zone until you can eventually ignore strangers rather than flee from them, or ask your parents what they’ll use your money for instead of living in uncertainty.
Your relationship with your parents sounds really destructive. Changing that should be high priority, whether it’s by moving across the country or group therapy and reconciliation or whatever. I don’t think income is the biggest barrier to your independence. Mediocre programmers can do pretty well (and can often work from home), and you say you own property, which can presumably be rented or sold. I’m more worried about your independent-living skills; being able to manage the dozens of mundane tasks that parents take care of (e.g. buy groceries, get an air filter changed, pay bills on time) can be a struggle for a lot of people when they first move out. Reddit threads about “life pro tips,” or whatever the kids are calling it these days, will be your friend.
I have no idea how much blindness might exacerbate the problem. In any given city, there might or might not exist disability services that can help. My mom would probably be able to find out; let me know if I should ask her about any place in particular.
Applying to App Academy is exactly the kind of proactive, courageous thing you should be doing. Please take a moment to bask in my approval. The program sounds like it could provide everything you need, but it’s definitely high risk. You’ll be in a crucible where you have to live on your own, take care of yourself, and interact with humans. Either you’ll be forced to grow into a significantly more competent human being, or else you’ll get overwhelmed and burn out. If you get accepted (although my understanding is that such places are competitive) and decide to go, you’ll want to take what precautions you can. Work with the program to set up the supports to make sure you succeed. Leaning on the local rationalist community to do this in parallel, as you mentioned, is also a really great idea.
If you don’t go, do what you can to build your independence as soon as possible. You need those skills. Maybe you could do freelance coding online? Maybe you could move into that property you own? I don’t know. Change something.
You’re in a shitty place for now, but it looks like you’re on track to change it. You can gain the social skills, independence, and self-confidence you need to accomplish your goals. People in your situation have done it before. Mostly it seems to require the courage to actually try, and you already have that.
Thanks for all the encouragement.
I’m not overly optimistic that I have many opportunities to change anything, is the problem. If I do wind up at App Academy, I’d be surprised if that didn’t make a huge difference for the better. I can’t help but feel like that’s mostly all I have to bet on, though.
Something that makes this even more frustrating is that, had I realized enough of this just a year or two sooner, my opportunities to do something about it would have been far more numerous, simply by virtue of being at college and having access to more people and places (some of which were not unpleasant). But college was more about academics, and now the matter of paying for it is relevant, and both of those I’d like to avoid if at all possible.
I’m not sure how to respond to suggestions like “Go out and meet people” or “Go buy <-useful object->” (I’ve gotten these from elsewhere). Anything that involves me leaving the house is ridiculously difficult. I get the impression that this particular detail isn’t coming across very well when I try explaining the situation. “Gain the ability to do things outside the house” is more or less one of my current goals, not that I know how to achieve it.
There are other things that can provide the same benefits. Off the top of my head: a job where you don’t work from home, other coding boot camps, or CFAR. If App Academy falls through, you can pursue something else.
I don’t know anyone who doesn’t feel the same way about college, although the specific regrets vary from person to person. It is incredibly frustrating.
What’s the biggest difficulty standing in your way? Is it the physical travel or the social anxiety or something else entirely? If it’s a matter of location and transport, the first step is almost certainly “acquire money.” (Given your situation, I think “acquire money” is a hard but solvable problem. Maybe do something like earning two dollars on Mechanical Turk to break down ugh fields and start a success spiral?) Step two would be either “turn money into transportation” or “use money to move to civilization.”
More importantly, you can start building your skills without leaving the house. For example, if you’re training basic social skills, you could call an acquaintance from college or spend five minutes on chatroulette. Or work on getting to the point where you can move through your own house without fear—it sounds like that would improve your mood and productivity dramatically, and the resilience you’d acquire will help you everywhere you go.
This seems to be the only sentence in the post which includes the word blind. Are you blind?
For all intents and purposes, yes. My right eye may or may not do anything useful on occasion.
Blindness will make finding a job in many areas harder. On the other hand what useful skills do you have?
Writing (music and fiction), programming, and the ability to write hundreds of pages of notes without actually accomplishing anything. Best I can tell, I’m mediocre at all of these. I was good at math until calculus 2 (more complicated antiderivatives and summations of infinite series are where I got lost), and rumors of my linguistic abilities have been greatly exaggerated (I might be able to manage in Tokyo, Northeast China or the Francophone world, though I can’t follow native conversations in any of those languages very well at all). My website (the hotspots on the homepage are not properly scaled to the image) sorta-kinda demonstrates these. (I’ve been wanting to upgrade to a more consistent site layout for like four years now, but haven’t ever gotten around to it.)
I have a strong feeling that I would prefer manual labor to a programming job, although the latter is where I’m directing my vague attentions at present.
Suggestion: the open thread for each month should be pinned to the top of Discussion for the duration of that month. Otherwise, the longer the month goes on, the less likely a particular post in the open thread is to be read.
Pinned to the top might be too much, but I think it should be pinned to the top twice a week.
Second thought—I meant that that open thread should be moved to the top twice a week.
Norbert Wiener, a mathematician from MIT, postulated unfriendly AI in 1949.
Is CFAR’s Paypal donation page not working for anyone else, or is it just me? Both monthly and one-time donations fail to process the transaction.
Error messages after logging in to Paypal:
or
Nope, you’re not the only one. Yikes! Thanks for the heads up, we’ll look into it.
Imagine sufficiently strange aliens were peeking into our low-dimensional slice of totality. They’d see matter/energy states which change, matter/energy states which stay the same, and change occurring at different rates. They wouldn’t prima facie find “bipeds walking around” any more special-consideration-worthy than “bubbles in a pond”; it wouldn’t trigger any “sentience alarms” (maybe their intuition rests on a nano scale).
Consider they were searching for something interesting, maybe approaching whatever life-analogues they defined. Systematically zooming through different processes at different scales. Now, data in itself is nothing without the interpretation that allows you to see the information represented by the data.
Consider such a strange alien looked at a computer: only a minuscule fraction of the total processes going on—well below Bremermann’s limit—corresponds to even the physical layer in the stack. The relevant processes (not knowing which are relevant, and if there even is anything “relevant”) have to be isolated just to have the specific data for which an interpretation can be found (low “voltage” in this “gate” = “0” in a binary system, or whatever). All just to unlock a preliminary step towards eventually, maybe, understanding that there is a game of Minesweeper running on that computer. Would an alien linger long enough to reach such a conclusion, before prematurely concluding the processes are probably on the same order of importance as the computation inherent in rain splashing on the ground?
Imagine two such aliens were in a contest, a race: One trying to find meaning (an interpretation under which information can be gleaned which indicates something interesting—agent-y, replicator-y) in a pile of matter/energy we’d call a desktop PC, the other alien trying to find such meaning within what happens in the sun, with its constant and fast reactions (cue “the suns are sentient” sci fi novels).
Think of the amount of “computation” which is happening when an avalanche is crashing down a hillside. Scenarios such as Douglas Adams’ Earth as a supercomputer to solve a problem may not even be that far-fetched—all we need is an interpretation so that the computation that is going on all around us anyway can be interpreted for something useful.
We can certainly surmise that there seems to be no computation going on which we associate with sentience as we understand it; the strictly biological criteria of life aren’t met. But isn’t that like a civilization of bacteria ruling their host body not to be “sentient”, since its individual cells—specialized in their function—have trouble thriving on their own? We see complex computations all around us; how certain should we be that there is no interpretation we lack under which we’d find that the sun is actually computing interesting stuff—or having thoughts and concepts, for sufficiently strange definitions of thoughts and concepts?
Maybe similarly to how “information theoretic death” sensibly extends the old and narrow categories of “brain activity stopped, time to bury the body”, we should define something along the lines of “candidate process for having an interesting interpretation”, which could contain criteria such as “capacity to store information, delta of state change, amplitude of state change”, and so on.
My recollection of a standard exposition (found the source, see edit) goes like this: a beautiful waterfall is a complicated dynamic system, containing many more atoms than a human brain, all in motion. Were one clever, one could map the motion of water in part of the waterfall to the motion of atoms and charge in a human brain. Then the waterfall is a person, thinking thoughts as it burbles. Except there is a problem, where each waterfall has many possible mappings, and thus spans the whole range of brains!
If one then asks the question “so why aren’t you a waterfall?” this is a sort of epistemological analogue to the Boltzmann brain hypothesis.
I seem to recall the original argument going a different place: “are waterfalls on-average blissful or suffering, and by how much do billions of waterfalls encoding all possible minds outweigh our petty human concerns?”
EDIT: Ah, found the source (ctrl+f “waterfall”), which references Putnam and Searle, and is worth a read in its entirety. A little discussion of ethical implications on LW can be found here.
Set Theory and Uncaused Causes
I’m relocating part of a thread that was originally on “Welcome to Less Wrong” but has wandered way off topic. It also seems that a remote ancestor comment was heavily downvoted, discouraging further contributions in the original place. So I’m moving it into the Open Thread.
Here are links to my latest version of the “recipe”, and to CCC’s response.
As discussed, I have a new version which preserves the proof structure, but weakens the premises about as much as possible.
A1. The collection of all entities is a set E, with a causal relation C and a partial order P, such that x P y if and only if x is a part of y.
Note: This merges the assumption that P is a partial order into the overall set-up; that feature of P now gets used earlier in the argument.
A2. The set E can be well-ordered.
This ensures we can apply Zorn’s Lemma when considering chains in E, but is not as strong as the full Axiom of Choice. If the set E is finite or countable, for instance, then A2 applies automatically.
Definitions: We define a relation C* such that x C* y iff there are entities v, w such that v P x, y P w and v C w.
Note: This gives a broader causal relation which automatically satisfies “if x C* y and x P z then z C* y” as well as “if x C* z and y P z then x C* y”, loosely “anything which is caused by a part is caused by the whole” and “anything which causes the whole, causes the part”. So we don’t need to state those as extra premises.
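To make the construction concrete, here is a brute-force Python sketch on a made-up finite model (the entity names and the particular C and P are invented purely for illustration; nothing here is part of the argument itself):

```python
from itertools import product

# Hypothetical toy model: "ab" is a whole with proper parts "a" and "b".
E = {"a", "b", "ab", "c"}
P = {(x, x) for x in E} | {("a", "ab"), ("b", "ab")}  # part-of, reflexive
C = {("a", "c"), ("c", "ab")}                         # underlying causal relation

def c_star(C, P, E):
    # x C* y iff there are entities v, w with v P x, y P w and v C w
    return {(x, y) for x, y in product(E, repeat=2)
            if any((v, x) in P and (y, w) in P and (v, w) in C
                   for v, w in product(E, repeat=2))}

CS = c_star(C, P, E)

# "Anything which is caused by a part is caused by the whole":
# a C* c and a P ab, hence ab C* c.
assert ("a", "c") in CS and ("ab", "c") in CS
# "Anything which causes the whole, causes the part":
# c C* ab and a P ab, hence c C* a.
assert ("c", "ab") in CS and ("c", "a") in CS
```

Both closure properties fall straight out of the definition, which is exactly why they do not need to be stated as extra premises.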
We then define a further relation ⇐ such that x ⇐ y iff x = y, or there are finitely many entities x1, …, xn such that x1 = x, xn = y and xi C* xi+1 for i=1.. n-1.
Note: This construction ensures that ⇐ is a pre-order on E.
Say that a subset S of E is a “chain” iff for any x, y in S we have x ⇐ y or y ⇐ x. Say that S is an “endless chain” iff for any x in S there is some y in S distinct from x with y ⇐ x. We shall say that y is “uncaused” if and only if there is no z in E distinct from y with z C* y (this of course implies there is no z distinct from y with z C y, but it also implies that y isn’t part of anything which is caused by something distinct from y). Say that x is a proper part of y iff x is distinct from y but x P y.
A3. Let S be any endless chain in E; then there is some z in E such that every x in S is a proper part of z.
Lemma 1: For any chain S in E, there is an entity x in E such that x ⇐ y for every y in S.
Proof: Suppose S is not endless. Then there is some x in S such that for no other y in S is y ⇐ x. By the chain property we must have x ⇐ y for every member y of S. Alternatively, suppose that S is endless; then by A3, there is some z in E of which every x in S is a proper part. Now consider any y in S. There is some x not equal to y in S with x ⇐ y, so there are entities x = x1… xn = y with each xi C* xi+1 for i=1..n-1. Further, as x P z we have z C* x2 and hence z ⇐ y.
Lemma 2: For any x in E, there is some y in E such that y ⇐ x, and for every z ⇐ y, we must have y ⇐ z.
Proof: By Lemma 1, every chain in {z in E: z ⇐ y} has a lower bound, which again lies in that set; so this follows from the dual (minimal-element) form of Zorn’s Lemma applied to pre-orders.
Theorem 3: For any x in E, there is some uncaused y in E such that y ⇐ x.
Proof: Take a y as given by Lemma 2 and consider the set S = {s: s ⇐ y}. By Lemma 2, y ⇐ s for every member of S, and if S has more than one element, then S is an endless chain. So by A3 there is some z of which every s in S is a proper part, which implies that z is not in S. But by the proof of Lemma 1, z ⇐ y, which implies z is in S: a contradiction. So it follows that S = {y}, which completes the proof.
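For what it’s worth, Theorem 3 is easy to sanity-check by brute force on a toy finite model (the entities and the stand-in relation CS below are made up; this only illustrates the statement, it is no substitute for the proof):

```python
from itertools import product

# Hypothetical finite model; CS stands in for the constructed relation C*.
E = {"a", "b", "c"}
CS = {("a", "b"), ("b", "c")}

def preceq(CS, E):
    # x <= y: the reflexive-transitive closure of C*
    rel = {(x, x) for x in E} | set(CS)
    changed = True
    while changed:
        changed = False
        for x, y in product(E, repeat=2):
            if (x, y) not in rel and any((x, m) in rel and (m, y) in rel
                                         for m in E):
                rel.add((x, y))
                changed = True
    return rel

LE = preceq(CS, E)
uncaused = {y for y in E if not any(z != y and (z, y) in CS for z in E)}

assert uncaused == {"a"}
# Theorem 3: every entity has some uncaused entity <=-below it.
assert all(any((y, x) in LE for y in uncaused) for x in E)
```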
We now partition E into three subsets. I are the “inert” entities, which do not cause anything and have no causes themselves. (Note that the new version allows there to be some of these, unlike the previous version; you can think of them as abstract entities like numbers, sets, propositions and so on, if you want to.) Formally I = {x in E: there is no y distinct from x with x C* y or y C* x}. U are the “uncaused causes”—formally U = {x in E: there is no y distinct from x with y C* x, but there is z distinct from x with x C* z}. O are all the “other”, caused entities, so that formally O = {x in E: there is some y distinct from x with y C* x}.
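The partition itself is purely mechanical; here is a made-up miniature example in Python (CS again standing in for the constructed relation):

```python
# Hypothetical model: "n" is inert, "u" is an uncaused cause,
# "o1" and "o2" are caused. CS stands in for the constructed C*.
E = {"n", "u", "o1", "o2"}
CS = {("u", "o1"), ("o1", "o2")}

I = {x for x in E
     if not any(y != x and ((x, y) in CS or (y, x) in CS) for y in E)}
U = {x for x in E
     if not any(y != x and (y, x) in CS for y in E)
     and any(z != x and (x, z) in CS for z in E)}
O = {x for x in E if any(y != x and (y, x) in CS for y in E)}

assert I == {"n"} and U == {"u"} and O == {"o1", "o2"}
assert I | U | O == E and not (I & U or I & O or U & O)  # a genuine partition
```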
B1. If S is any subset of U such that for any x, y in S we have x P y or y P x, (call such an S a “chain of parts”), then there is some entity z of which all members of S are parts.
B2. Suppose that y ⇐ x and z ⇐ x. Then there is some entity w such that: w ⇐ x; w ⇐ y or y P w; w ⇐ z or z P w.
(EDIT: Restated to ensure that Theorem 4 properly follows.) Informally, the idea is that y and z can’t independently cause x without any further causal explanation. So there must be some common cause, however each of them may be part of that common cause.
Definition: Say that entities x and y are causally-connected if and only if x=y or there are finitely-many entities x=x1,..,xn=y with each xi C* xi+1 or xi+1 C* xi for i=1..n-1.
B3. Any two entities x, y in O are causally-connected.
Informally, O doesn’t “come apart” into disconnected components, such as a bunch of isolated universes. Premises B1-B3 turn out to be necessary for Theorem 6 to hold, as well as sufficient (see below). So they can’t be made any weaker!
Theorem 4: For any x in O, there is a unique entity f(x) in U such that: f(x) ⇐ x, and any other y in U with y ⇐ x satisfies y P f(x).
Proof: For any x in O, define a subset U(x) = {y in U: y ⇐ x}; this is non-empty by Theorem 3. Consider any chain of parts S that is a subset of U(x). If it has at least two members, then by B1 there is some z in E of which all members of S are parts, and such a z must be in U. (If not, then note any w C* z would also satisfy w C* s for each member s of S, which would require them all to be equal to w.) Also, since y ⇐ x for any member y of S and y P z, we have z ⇐ x. So z is also a member of U(x). Or if S is a singleton—say {z}—then clearly all members of S are parts of z, and z is also in U(x). By application of Zorn’s Lemma to U(x), there is a P-maximal element f(x) in U(x) such that there is no other y in U(x) with f(x) P y. By B2, for any other y in U(x) there must be some z in U(x) with f(x) P z and y P z; given f(x) is maximal we have z = f(x) and so y P f(x). This makes f(x) the unique maximal element of U(x).
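Here is a tiny made-up model illustrating Theorem 4’s conclusion. For simplicity the stand-in CS lists direct causes only, so membership in U(x) reduces to a single causal step rather than the full chain relation ⇐:

```python
# Hypothetical model: g is a whole whose proper parts u1 and u2 are all
# uncaused, and all three cause x. CS stands in for the constructed relation.
E = {"u1", "u2", "g", "x"}
P = {(e, e) for e in E} | {("u1", "g"), ("u2", "g")}
CS = {("u1", "x"), ("u2", "x"), ("g", "x")}

uncaused = {y for y in E if not any(z != y and (z, y) in CS for z in E)}
U_x = {y for y in uncaused if (y, "x") in CS}            # uncaused causes of x
f_x = {y for y in U_x
       if not any(y != z and (y, z) in P for z in U_x)}  # P-maximal in U(x)

assert uncaused == {"u1", "u2", "g"}
assert f_x == {"g"}                          # a unique maximal element, f(x)
assert all((y, "g") in P for y in U_x)       # every other one is a part of it
```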
Theorem 5: For any x, y in O, f(x) = f(y) if and only if x and y are causally-connected.
Proof: It is clear that if f(x) = f(y) then x is causally-connected to y (just build a path backwards from x to f(x) and then forward again to y). Conversely, consider any two x, y in O:
a) If x C* y, then f(x) is in U and satisfies f(x) ⇐ y, so we have f(x) P f(y). Since x is not f(x), we have f(x) C* x2 ⇐ x for some x2, and hence f(y) C* x2 ⇐ x, i.e. f(y) ⇐ x, which means f(y) P f(x), and so f(x) = f(y).
b) If z is in U with z C* x and z C* y, then z P f(x), so f(x) C* y, hence f(x) ⇐ y and f(x) P f(y); similarly, f(y) P f(x), so that f(x) = f(y).
The result now follows by induction on the length of the causal path connecting x to y.
Theorem 6: If O is non-empty, then there is a single entity g in U such that: f(x) = g for every x in O, and y P g for every y in U.
Proof: Assuming O is non-empty, take any element y in O, and set g = f(y); then the result that f(x) = g for any x in O follows from Theorem 5 and B3; further, for any y in U, there is some x in O with y C* x, so by Theorem 4, y P f(x). If there are no elements of O (meaning there are none in U either) then the Theorem is trivial.
Finally, note that B1, B2 and B3 are entailed by the statement of Theorem 6. For B1, we can just take g as the relevant z. For B2, we can take g as the relevant w. B3 follows using the first part of Theorem 5 (just track from x back to g, then forward to y again).
I’m just about done now, so unless there are errors in the above proof I will leave it there. What are the residual weak points? Well, B2 and B3 have been weakened a bit, but are still basically unjustifiable (we can imagine them being false without absurdity), and the above re-work shows they are needed for the uniqueness conclusion (Theorem 6). Also, we have the weakness of not deriving anything else useful about g.
This will lead to a problem.
Consider assumption A3:
Consider any endless chain consisting of at minimum two elements. Consider two elements in that chain, x and y, such that x C* y. x and y are both proper parts of z. Therefore, x C* z, and z C* y. But then we have a longer chain, using x C* z C* y in place of x C* y. Each element of that longer chain must then, by A3, be a proper part of a larger entity, z2. But then, similarly, we can construct the chain using x C* z C* z2 C* y. There are therefore an infinite number of entities z, z2, z3, z4… and so on, each including the one before it as a proper part (and nothing that is not part of the one before it).
Furthermore, anything which is a proper part of anything else is a part of such an infinitely recursive loop by default.
This leads to trouble in the proof of theorem 3.
It follows that z C* y, but it does not follow that x C* z or that y C* z. The “whole” z may be a cause of its parts, without in turn being caused by its parts. Note that by construction of C* it is true that if x is a cause of y and x is a part of z, then z C* y. However, it is not generally true that if x is a cause of y and z is a part of x then z C* y.
As an example of the intuition behind this: suppose I have a thermostat box containing two circuit boards. Board 1 is connected into my home heating system; Board 2 is a spare not connected into anything. It is true that Board 1 causes my heating to come on. It is true that the thermostat (of which Board 1 is part) causes my heating to come on. It is false that Board 2 (which is part of the thermostat) causes my heating to come on.
You are right that when adding z, we now get a longer chain {x, y, z}, but this won’t in general be an “endless chain” (the new z may well be an end).
It does, because y P z.
x C* y, and y P z. Therefore, x C* z.
No, you need x C* y and z P y to get x C* z (be careful about which way the P relation is going).
The intuition is “Anything which is a cause of the whole is a cause of the part”, not “Anything which is a cause of the part is a cause of the whole”. Again, there are intuitive examples here. (Compare me baking a cake for a child’s birthday party vs just buying the cake from a shop, and putting a few sprinkles and candles on the top. In the second case, I am a cause of some part of the cake as presented to the child, but not the whole cake, and if someone says “Wow that cake tasted delicious!” I’d have to admit I didn’t make it, only decorated it).
I make a cake. I am a cause of the cake. The cake contains eggs. I am not the cause of the eggs. I think “what causes the whole, causes each part” is a bad intuition to have.
In general, I think it is an error, and a source of confusion, to think of things rather than events having, or being, causes. I know people sometimes do, and I’ve gone along with it in #1 above, but I think it’s a mistake.
Why would anyone assume A3? It seems really arbitrary. Exception: you might believe A3 because you believe in an entity of which all others are parts. See below.
If E includes an entity V of which all others are parts (call it “the universe”) then, provided C is reflexive, V C* anything-you-like. And I think it’ll then turn out that the way all your theorems work is that V is the canonical uncaused cause of everything. Which is a bit dull and wouldn’t satisfy many theists. Perhaps something more interesting happens if you make C irreflexive instead, so that things don’t count as causes of themselves.
Fair points, though there is in fact a lot of disagreement about what are the basic relata of the causal relation: see the SEP entry for example. When we apply causation to entities (which we can sometimes do, as in your example) then “A causes B” probably means something like “at least one event in which A is involved is a cause of every event in which B is involved”.
On counterexamples to “what causes the whole, causes the part” : possibly an even stronger counterexample considers just one of the atoms in the cake. However, we must be careful here: it is only some temporal part of the egg (or of the atom) which is part of the cake; the eggs/atoms in their full temporal entirety are NOT parts of the cake in its full temporal entirety. We could perhaps treat the relevant temporal part (“egg mixed into cake” or “atom within cake”) as an “entity” in its own right, but then it does seem that by making the cake, I am a cause of all the events which involve that particular “entity” (since I put the egg/atom into the cake in the first place).
In any case, note that the most recent version of the argument doesn’t actually need to assume this “cause-whole ⇒ cause-part” applies to C, since it only ever uses the constructed relation C* instead. The conclusion is still interesting, since if nothing C*s the entity g, then nothing Cs it either, and if g causes some whole of which each entity is a part, that is still an interesting property of g. The argument makes no assumptions on whether C is reflexive or not.
On A3, I’m not totally sure of the circumstances in which we can aggregate entities together and treat them as parts of a single entity, but if the entities are causally related (and particularly if they are causally-related in an odd way, like an endless chain), then it does make some sort of sense to do this aggregation. After all, we immediately want to ask the question “How could there be an endless chain?” a question which does treat the “chain” as some sort of an entity to be explained. If entities are not causally related (they are in different universes), lumping them together seems much less natural.
Finally, on the “maximal entity” approach, CCC I believe discussed this in the original thread before I lifted here, and he seems to find it theologically interesting.
I agree that “x C* y, and y P z. Therefore, x C* z” is wildly unintuitive, causes problems, and is just plainly wrong. But...
...
...actually, looking back, you’re right. I apologise; I misread the definition of C* (I read w P y instead of y P w).
I’m going to have to look through it again before I can comment further.
Looking over the “recipe” again, I notice one thing I failed to notice earlier; at no point have you formally defined the uncaused cause as God. That’s a weak step, but if you actually want to use that argument to argue for monotheism (instead of merely the presence of an uncaused entity, nature unknown) then I think it’s a necessary step. Some of the assumptions necessary to eliminate multiple uncaused causes are a bit weak, but I think that you have a very good argument that somewhere in the history of the universe there must be at least one uncaused cause of some sort.
No, I omitted that step for reasons discussed in the earlier thread: this gives too weak a “God” to be any interest to anyone, and is downright confusing.
The only way I can think to get back to some form of traditional theism is to add a premise saying that “every entity not of type G has a cause” (insert your favourite G) and then perhaps to pull the modal trick of claiming all the premises are possible...
Okay, that’s reasonable.
I have spotted an error in the statement (and proof) of Theorem 5, and then Corollary 6. The issue is that for any uncaused y we must have f(y) = y, so if there are several uncaused entities then they can’t all have f(y) = g. The revised statements should go like this:
Theorem 5: Let x and y both have causes. Then f(x) = f(y) if and only if x and y are causally connected.
Proof: It is clear that if f(x) = f(y) then x is causally-connected to y (just build a path backwards from x to f(x) and then forward again to y). Conversely, suppose that x C y, then f(x) is uncaused and satisfies f(x) ⇐ y so we have f(x) P f(y); since x is caused, there are f(x)=x1,...,xn=x such that each xi C xi+1 for i=1..n-1, then by A3 we have f(y) C x2 and hence f(y) ⇐ x, which implies f(y) P f(x) and so by B1 f(x) = f(y). Next, suppose that for some uncaused z we have z C x and z C y; then z P f(x) which implies by A3 that f(x) C y and hence f(x) P f(y); similarly, f(y) P f(x) so by B1 f(x) = f(y). By an induction on the length of any other path connecting x to y, we have that f(x) = f(y).
Corollary 6: There is a single g in E such that: f(x) = g for every x in E with a cause, and every uncaused y P g.
Proof: Suppose there is at least one entity x with a cause, then set g = f(x). For any other caused entity y, f(y) = g by Theorem 5 and B5, and for an uncaused y, B5 implies y ⇐ x, so that y P g. Lastly, if there are no caused entities, then B5 implies that E = {y} for some uncaused y, so we can just pick g = y.
I have also spotted a way of weakening or removing some of the premises (in particular A3, and B1 to B4). I will update with that later today.
I’ve had another look at the argument, and spotted a way to remove the step that relies on “if x C* z and y P z then x C* y”, loosely “anything which causes the whole, causes the part”. Since there appear to be counterexamples to that as a causal intuition, it’s a good idea to try to eliminate the step, even for a constructed relation based on C.
Here is a new version, which uses an alternative relation C’. The relation is still constructed on “entities”, under the assumption it makes some sense to talk of one entity as a cause of another (it could probably be adapted to other relata of causation like “events” or “states of affairs”). The conclusion of the argument turns out to be a bit weaker than before, but in a rather interesting way.
A1. The collection of all entities is a set E, with a causal relation C and a partial order P, such that x P y if and only if x is a part of y.
A2. The set E can be well-ordered.
Note: This ensures we can apply Zorn’s Lemma when considering chains in E, but is not as strong as the full Axiom of Choice. If the set E is finite or countable, for instance, then A2 applies automatically.
Definitions: We define a relation C’ such that x C’ y iff there is some z which is part of x, but not part of y, and which satisfies z C y.
Note: This gives a relation which automatically satisfies “if x C’ y and x P w then w C’ y”, loosely “anything which is caused by a part is caused by the whole”. Also, C’ is guaranteed to be irreflexive since it is impossible to have x C’ x (no z can be simultaneously part of x and not part of x), loosely “no entity is a cause of itself”. These are plausible conditions on the underlying relation C, in which case we have C’ = C, but the construction of C’ means we don’t need to state these conditions as extra premises. We will see below that something else interesting can happen if C does not meet these conditions.
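A quick Python sketch of this construction on a made-up model, checking both the built-in irreflexivity and the part-to-whole property (the underlying C here even includes a self-cause, which C’ discards):

```python
from itertools import product

# Hypothetical toy model: "ab" is a whole with proper parts "a" and "b".
E = {"a", "b", "ab"}
P = {(x, x) for x in E} | {("a", "ab"), ("b", "ab")}  # part-of, reflexive
C = {("a", "b"), ("b", "b")}                          # note the self-cause

def c_prime(C, P, E):
    # x C' y iff some z is part of x, not part of y, and satisfies z C y
    return {(x, y) for x, y in product(E, repeat=2)
            if any((z, x) in P and (z, y) not in P and (z, y) in C
                   for z in E)}

CP = c_prime(C, P, E)

assert ("a", "b") in CP                     # a's part a causes b
assert ("ab", "b") in CP                    # caused by a part => by the whole
assert not any((x, x) in CP for x in E)     # irreflexive, unlike C itself
```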
We then define a further relation ⇐ such that x ⇐ y iff x = y, or there are finitely many entities x1, …, xn such that x1 = x, xn = y and xi C’ xi+1 for i=1.. n-1.
Note: This construction ensures that ⇐ is a pre-order on E.
Say that x is “causally dependent” on y iff x is distinct from y but x ⇐ y; say that y is “dependent” iff there is some x on which y is causally dependent, and “independent” otherwise (this is equivalent to saying that any z ⇐ y satisfies z = y). Say that y is “wholly independent” if and only if y is not part of any dependent entity (which implies that y itself is independent). Say that a subset S of E is a “chain” iff for any x, y in S we have x ⇐ y or y ⇐ x. Say that S is an “endless chain” iff for any x in S there is some y in S such that x is causally dependent on y. Say that x is a “proper part” of y iff x is distinct from y but x P y.
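For reference, the key definitions above can be restated compactly (this is just notation, not a new premise; the quantified variables range over E):

```latex
x \mathrel{C'} y \;\iff\; \exists z \,\bigl( z \mathrel{P} x \;\wedge\; \lnot(z \mathrel{P} y) \;\wedge\; z \mathrel{C} y \bigr)

x \Leftarrow y \;\iff\; x = y \;\vee\; \exists x_1, \dots, x_n \,\bigl( x_1 = x \;\wedge\; x_n = y \;\wedge\; x_i \mathrel{C'} x_{i+1} \text{ for } i = 1, \dots, n-1 \bigr)

y \text{ is independent} \;\iff\; \forall z \,( z \Leftarrow y \rightarrow z = y )
```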
A3. For any endless chain S in E, there is some entity z in E such that every x in S is a proper part of z.
Informally, the “endless chain” describes an odd causal relationship between entities: either an infinite descending sequence of causes, or a circle of causes. A natural question to ask is how there could be such a chain, and this involves treating the chain itself as some sort of entity to be explained.
Notice that if there are no endless chains at all, then A3 is true vacuously. Also, A3 is true provided arbitrary collections of entities can be aggregated together to form an entity, but the argument doesn’t need to rely on such a strong assumption. Finally, A3 is true if there is a “maximum” entity w, one of which every other entity is a proper part; but again the argument doesn’t need to rely on such an assumption.
Lemma 1: For any chain S in E, there is some entity w in E such that w ⇐ y for every y in S.
Proof: Suppose S is not endless. Then there is some x in S such that for no other y in S is y ⇐ x. By the chain property we must have x ⇐ y for every member y of S. Alternatively, suppose that S is endless, then by A3, there is some z in E of which every x in S is a part. Now consider any y in S. There is some x not equal to y in S with x ⇐ y, so there is an entity x2, with x C’ x2 ⇐ y. Further, as x P z we have z C’ x2 ⇐ y and hence z ⇐ y.
Lemma 2: For any x in E, unless some w ⇐ x is wholly independent, then there is some dependent y ⇐ x, such that for any dependent z ⇐ y we have y ⇐ z.
Proof: This follows from Zorn’s Lemma for pre-orders. Consider the set Y = {y: y is dependent, and y ⇐ x}, and observe that for any chain S in Y, Lemma 1 gives some w with w ⇐ s for every s in S and also w ⇐ x. Either this w is wholly independent or it is part of some dependent y with y ⇐ s for every s in S and y ⇐ x, so that this y is also in Y. Zorn’s Lemma now implies that Y contains some element which is minimal with respect to ⇐ within Y. To conclude the proof, consider any other dependent z ⇐ y; then z ⇐ x as well, so that z is in Y and y ⇐ z.
Theorem 3: For any x in E, there is some w ⇐ x which is wholly independent.
Proof: Suppose that no w ⇐ x is wholly independent, take a y satisfying Lemma 2, and consider the set S = {s: s is dependent and s ⇐ y}. Since y is dependent, there is some w C’ y, and some dependent v with w P v, so that v C’ y, and hence v is not equal to y. Hence S contains at least two members, and by Lemma 2, y ⇐ s for every member of S, and so S is an endless chain. So by A3, and by assumption that no z ⇐ x is wholly independent, there is some dependent z of which every s in S is a proper part, which implies that z is not in S. But by the proof of Lemma 1, z ⇐ y, which implies z is in S: a contradiction. So it follows that some y ⇐ x is wholly independent.
Now for the “uniqueness” part. As before, we partition E into three subsets. I are the “inert” entities, which do not causally-depend on anything and have no causal dependencies themselves. (These could be abstract entities like numbers, propositions and so on). Formally I = {x in E: there is no y with x C’ y or y C’ x}. U are the “uncaused causes”—formally U = {x in E: there is no y with y C’ x, but there is z with x C’ z}. O are all the “other”, caused entities, so that formally O = {x in E: there is some y with y C’ x}. We will also let W be the “wholly uncaused causes”—formally W = {u in U: there is no x, y with u P y and x C’ y}.
B1. If S is any subset of W such that for any x, y in S we have x P y or y P x, (call such an S a “chain of parts”), then there is some entity z of which all members of S are parts.
B2. Suppose that y ⇐ x and z ⇐ x. Then there is some entity w ⇐ x such that: w ⇐ y or y P w; w ⇐ z or z P w.
Informally, the idea is that y and z can’t independently cause x without any further causal explanation. So there must be some common cause, though each of them may be part of that common cause.
Definition: Say that entities x and y are “causally-connected” if and only if x=y or there are finitely-many entities x=x1,..,xn=y with each xi C’ xi+1 or xi+1 C’ xi for i=1..n-1.
B3. Any two entities x, y in O are causally-connected.
Informally, O doesn’t “come apart” into disconnected components, such as a bunch of isolated universes. Premises B1-B3 turn out to be necessary for Theorem 6 to hold, as well as sufficient (see below). So they can’t be made any weaker!
Theorem 4: For any x in O, there is a unique entity f(x) in W such that: f(x) ⇐ x, and any y in U with y ⇐ x satisfies y P f(x).
Proof: For any x in O, define a subset W(x) = {y in W: y ⇐ x}; this is non-empty by Theorem 3. Consider any non-empty chain of parts S that is a subset of W(x). Since W(x) is a subset of W, then by B1, there is some z in E of which all members of S are parts, and such a z must also be in W. (If z is part of some dependent x then so is some s in S part of x, which is impossible because s is in W). Also since y ⇐ x for some member y of S, where y cannot be equal to x, and y P z, we have y C’ x2 ⇐ x for some x2, so z ⇐ x, as in the proof of Lemma 1, and z is also a member of W(x). Even if S is empty, we can take some z in W(x), and then trivially all members of S are parts of z. By application of Zorn’s Lemma to W(x), there is an element f(x) in W(x) which is maximal with respect to P i.e. there is no other y in W(x) with f(x) P y. Now, by B2, for any other y ⇐ x in U there must be some z ⇐ x with f(x) P z and y P z; this implies z is in W, and given f(x) is maximal in W(x) we have z = f(x) and so y P f(x). This makes f(x) the unique maximal element of W(x).
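In symbols, the construction in this proof (restating Theorem 4, nothing new):

```latex
W(x) = \{\, y \in W : y \Leftarrow x \,\}, \qquad
f(x) = \text{the unique $P$-maximal element of } W(x),

\text{so that}\quad f(x) \Leftarrow x
\quad\text{and}\quad
\forall y \in U \,\bigl( y \Leftarrow x \rightarrow y \mathrel{P} f(x) \bigr).
```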
Theorem 5: For any x, y in O, f(x) = f(y) if and only if x and y are causally-connected.
Proof: It is clear that if f(x) = f(y) then x is causally-connected to y (just build a path backwards from x to f(x) and then forward again to y). Conversely, consider any two x, y in O:
a) If x C’ y, then f(x) is in U and satisfies f(x) ⇐ y so we have f(x) P f(y). Since x is not f(x), we have f(x) C’ x2 ⇐ x for some x2, and hence f(y) ⇐ x which means f(y) P f(x), and so f(x) = f(y).
b) If z is in U with z C’ x and z C’ y, then z P f(x) so f(x) ⇐ y and f(x) P f(y); similarly, f(y) P f(x) so that f(x) = f(y).
The result now follows by induction on the length of the causal path connecting x to y.
Theorem 6: If O is non-empty, then there is a single entity g in W such that: f(x) = g for every x in O, and y P g for every y in U.
Proof: Assuming O is non-empty, take any element y in O, and set g = f(y); then the result that f(x) = g for any x in O follows from Theorem 5 and B3; further, for any y in U, there is some x in O with y C’ x, so by Theorem 4, y P f(x). If there are no elements of O (meaning there are none in U either) then the Theorem is trivial.
Note that B1, B2 and B3 are entailed by the statement of Theorem 6. For B1, we can just take g as the relevant z. For B2, we can take g as the relevant w. B3 follows using the first part of Theorem 5 (just track from x back to g, then forward to y again). The premises have been weakened a bit, but are still basically unjustifiable (we can imagine them being false without absurdity), and the above re-work shows they are needed for the uniqueness conclusion (Theorem 6).
One interesting fact is that Theorem 6 now has a loophole that wasn’t there before. If C is not equal to C’, then it is possible that there is some x C g, provided any such x is part of g (i.e. g contains any cause for itself within itself). This even allows Theorem 6 to be consistent with a premise like this:
B4: For any entity x, there is some y with y C x.
Informally, “every entity has a cause”, a premise which is usually considered a fatal inconsistency in a first cause argument! With a bit of renaming, the set O could be said to consist of “ordinary” entities (ones which have causes that are not parts of themselves) and the remaining non-inert entities are “extraordinary” entities (ones which contain all their own causes). W are “wholly extraordinary entities”, ones which are not part of any ordinary entity. Theorem 6 implies that every ordinary entity is causally dependent on a single wholly extraordinary entity, one which contains every other extraordinary entity.
Again, there aren’t any particularly theistic conclusions here, since g could very well be a maximum entity if there is one—say g is the whole universe or multiverse. In that case, g contains every ordinary entity as well as containing all the extraordinary entities.
At the very least that many levels of positive votes in a row will likely insulate you from having the attempt to circumvent the troll tax summarily downvoted then banned. Since doing this isn’t too much additional work (and deep nesting is a nuisance in other ways anyhow) it probably suffices to culturally support or accept transplanted conversations that turn out to be valuable despite an early outlier. (It isn’t something that happens too often.)
Does anyone know how to view or expand the snippet of a google search result? One can lengthen their query to include what they know comes after or before an ellipsis, but even if one remembers, eventually the snippet stops expanding.
I’m trying to access a page of a site that has been deleted from the site’s servers (apparently), but when the right query is entered into Google, text from the absent page is displayed in a snippet. The whole text appears to be somewhere—either hidden on the site or in Google’s database—as modifying the query changes the snippet (but not to the critical part of the text which I’ve forgotten), but webcache.googleusercontent.com hasn’t saved the page URL, and a search of site:webcache.googleusercontent.com/search?q=cache:[site] yields no results.
It’s not just this particular case that bothers me; I’ve experienced this before and want to know how to access the whole of the snippet text. The Wayback Machine does not crawl extensively enough to solve this problem.
If you require specific site information please pm me. Thank you.
http://archive.org/index.php
The IA is great, but lamentably incomplete as I have learned the hard way many times before (so I set up my own system to try to get stuff into the IA). I don’t think there’s any solution to Zaine’s problem if the “cache:” operator is no longer working for a page.

I would like to help create LessWrong communities in the Russian-speaking countries; can anyone provide me with site visitor statistics from Russia and the CIS?
13 of the respondents to the latest survey said they were from Russia.
I’m not confident this is the right outlet (if not, I apologize), but does anyone have tips on good data sources for, e.g., poultry statistics? I’m trying to get hold of data on the amount of eggs produced on a year-by-year, country-by-country basis. I appreciate any tips! Where do you go to find your data? (I choose to make this an open question.)
The UN has those numbers. I’ve only used the old version.
Thank you! I have come across the FAO; it proved to be very useful!
1) Find academic papers on the subject, and see where they got their data.
2) Try data-only search sites like Zanran, or set Google to search only for .xls files.
3) Ask on question-and-answer sites, like r/datasets.
WolframAlpha has access to a lot of this kind of data. Example search.
Sorry, I’ll ask a really dumb question here because it’s the middle of the night and my brain doesn’t work. What’s the “official” Bayesian response to this joke (see part 2)? To summarize, when a Bayesian talks about a coin with unknown bias, that involves a prior over possible biases, i.e. a subjective probability distribution over objective probability distributions. But Bayesians are supposed to think that objective probabilities don’t exist (“meaningless speculations about the propensities of different coins”). So how does that make sense?
The coin was made in one of several ways. (Perhaps these ways are parametrized by the ratio of weights between the two sides of the coin.) You have a subjective probability distribution over this set W of possible ways in which the coin was made, according to which p(w) is the probability that the coin was made in way w. This distribution should come from a maximum entropy prior incorporating all your knowledge about the origin of the coin.
Furthermore, for each way w in W, you have a conditional probability p(H | w), which is your subjective probability that the coin will turn up Heads given that the coin was made in way w. This conditional probability distribution incorporates your physical knowledge about how weight ratios in coins influence the dynamics of their flipping.
Finally, you compute the unconditional probability p(H) that the coin will come up heads by summing up the values p(H | w) * p(w) over all ways w in W.
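A minimal numeric sketch of this three-step decomposition; all the numbers below are invented for illustration:

```python
# Subjective distribution over the ways w the coin could have been made
# (e.g. from a maximum-entropy prior over weight ratios). Hypothetical values.
p_w = {"light": 0.25, "fair": 0.5, "heavy": 0.25}

# Conditional probability of Heads given each way of making the coin,
# encoding (assumed) physical knowledge about flipping dynamics.
p_H_given_w = {"light": 0.3, "fair": 0.5, "heavy": 0.7}

# Unconditional probability of Heads: p(H) = sum over w of p(H|w) * p(w).
p_H = sum(p_H_given_w[w] * p_w[w] for w in p_w)
print(p_H)  # 0.5
```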
Ah, right, my beliefs about each particular kind of coin are also subjective. That’s a good answer, thanks.
Laplace’s rule of succession is for this problem. Laplace defines orthodox bayesianism, not the heretic (infidel?) you quote. Why shouldn’t I believe that sometimes objective probabilities make sense, even if not all probabilities are objective? In any event, I can choose it as a model of the coin. I have a prior over biases and update it every time I see a flip of the coin. Ahead of time, my prediction for each flip is the same, but after a few flips, my new prediction is different from my old prediction. Of course, I could have other models, like that Alice always flips heads and Bob always flips tails, but if I’m flipping and I know I’m not cheating, then a constant bias seems like a pretty good model to me.
Laplace’s rule of succession implicitly relies on a particular choice of prior over possible biases, so I don’t see how this answers the question.
If you have no information, you choose a prior that reflects the fact that you have no information. One way of doing this is the principle of maximum entropy, which tells you in particular that if you’re trying to choose a prior over a parameter that lies in the interval [0, 1] (the bias of the coin), the maximum entropy prior is the uniform prior. If you have other information, you incorporate that into your choice of prior.
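With the uniform prior, the posterior predictive after seeing k heads in n flips is Laplace’s rule of succession, (k+1)/(n+2). A short sketch checking this numerically, using a grid approximation rather than the exact Beta integral:

```python
import numpy as np

# Uniform (maximum-entropy) prior over the coin's bias theta in [0, 1],
# discretised on a fine grid.
theta = np.linspace(0.0, 1.0, 100001)
prior = np.ones_like(theta)

# Update on observing k heads out of n flips (binomial likelihood).
k, n = 7, 10
posterior = prior * theta**k * (1.0 - theta)**(n - k)
posterior /= posterior.sum()  # normalise the discretised posterior

# Posterior predictive probability of heads on the next flip.
p_next = (theta * posterior).sum()
print(p_next)             # ~ 0.6667
print((k + 1) / (n + 2))  # Laplace's rule of succession: 8/12 ~ 0.6667
```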
Category theory gives a few hits at LW, but doesn’t seem to be very widely recognized. On a first glance it seems to be relevant for Bayes nets, cognitive architectures and several other topics. A recent textbook that seems very promising:
Category theory for scientists by David I. Spivak: http://arxiv.org/abs/1302.6946
Abstract: There are many books designed to introduce category theory to either a mathematical audience or a computer science audience. In this book, our audience is the broader scientific community. We attempt to show that category theory can be applied throughout the sciences as a framework for modeling phenomena and communicating results. In order to target the scientific audience, this book is example-based rather than proof-based. For example, monoids are framed in terms of agents acting on objects, sheaves are introduced with primary examples coming from geography, and colored operads are discussed in terms of their ability to model self-similarity.
These texts can work as an introductory undergraduate sequence (with “Sets for Mathematics” going after enough exposure to rigor, e.g. a real analysis course, maybe some set theory and logic, and Awodey’s book after a bit of abstract algebra, maybe functional programming with types, as in Haskell/Standard ML/etc.):
F. W. Lawvere & S. H. Schanuel (1991). Conceptual Mathematics: A First Introduction to Categories. Buffalo Workshop Press, Buffalo, NY, USA.
F. W. Lawvere & R. Rosebrugh (2003). Sets for Mathematics. Cambridge University Press.
S. Awodey (2006). Category Theory. Oxford Logic Guides. Oxford University Press, USA.
Second the recommendation of Lawvere and Schanuel. It really communicates the categorical way of thinking without requiring a lot of mathematical background (more traditional texts on category theory will talk about things like algebraic topology which historically motivated category theory but aren’t conceptually prior to it).
The Candle Problem is an experiment which demonstrates how time pressure and rewards can diminish people’s ability to solve creativity-requiring problems. People who weren’t offered rewards for solving a clever problem solved the problem faster than those who were offered even significant rewards. Another finding was that when the problem was simplified and the creativity requirement removed, the participants who were offered rewards performed the task much faster.
There are multiple theories that try to explain the result of this study, and the many other studies with similar results. One theory is still developing, but seems obvious when studying the neuroscience (summarized here in very broad strokes): currently it is thought that the left hemisphere of the brain is the more dominant one, capable of suppressing the right hemisphere. Additionally, the right hemisphere is the one associated with creating new, or ‘creative’, connections from known information. The evolutionary perspective is that the left hemisphere operates in a more immediate, focused mode of thinking, while the right hemisphere interprets the world in a larger context. A practical example of this is how birds prefer to use their left hemisphere to focus on searching for food, and the right hemisphere to maintain alertness to their surroundings. The left hemisphere is the preferred one for handling trained, familiar tasks, while the right hemisphere is more active when dealing with the unfamiliar.
With the added context of studies that indicate much higher activity in the right hemisphere when solving insight (a-ha! moment) requiring problems, and studies that show the right hemisphere is more active in relaxed states, it seems quite likely that when a reward is offered and pressure is put on the participants in the Candle Study, the left hemisphere is chosen as the processor for solving the issue at hand. Meanwhile if there is no direct pressure or reward, the right hemisphere is more active in contributing to the solving process.
Peter Singer on Effective Altruism on TED
Daniel Dennett’s tools for thinking
Some of it strikes me as very likely to be correct—learn from your mistakes, respect your opponent, choose opponents worthy of respect (my re-phrasing of his “don’t waste your time on rubbish”). Some of it is ideas I’m going to check—“surely” and rhetorical questions are what people do to shore up weak points in their arguments.
Social anti-induction: “People seem to like me, therefore their patience for me is not yet exhausted, but it eventually will be.”
I’m looking for more on the should-universe you occasionally see referenced around lesswrong.
So far all I can see is some vague references from EY (eg http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2k50 )
Anyone got anything?
I don’t think they are quite the same, but the Just World Hypothesis seems related.
Yesterday, I looked up information regarding what may be one of the issues with my right eye, based on one conversation I heard between my father and an eye doctor a few years ago. It kinda made me update my and the doctor’s levels of rationality downward a little, but I also realize I’m only working with surface-level information from like two Google searches and the Wikipedia article.
Uveitis is an inflammation of the uvea, generally the iris and surrounding tissues, which can lead to photophobia and vision loss. I looked up the Wikipedia article on uveitis at some point more than two years ago but probably less than 8 years ago, and was not convinced that it was even the correct condition (I had no memory of the term before that conversation, but I didn’t pay much attention to all the incomprehensible medical terminology before then; the causes didn’t seem to match my observations, etc.). Considering that the symptoms match my experience pretty closely, this was probably stupid. I was focusing on three particular factors that seemed causal to me: namely, I was only 2 years into puberty when it became a problem, I’d been taking eyedrops in an unusual way just before the incident and then misplaced them a couple days before, and the worst symptoms began with my first attempt to look around on the streets of Las Vegas in mid-summer. It definitely felt like look in the general direction of the sun while outside in Las Vegas → hitherto-unfamiliar eye pain → photophobia and rapid vision loss. So the first time I read the Wikipedia article and saw that infection is a common cause, I completely failed to update away from “Las Vegas nuked my eye” toward “maybe a pathogen did it”. Considering that one of the common pathogens associated with the condition is apparently Toxoplasma gondii, and I’d been around an infant both known to carry it and to have lymphatic issues for thirteen months prior to visiting Las Vegas (not to mention that I’d touched a corpse with the same just before said infant’s birth), that’s some poor updating on my part.
The more bothersome part, though, is that I took the doctor at his word when he said that the only practical treatment for uveitis (presumably uveitis-caused vision loss) would be a bionic eye. Yet as early as 2005, the FDA had approved Retisert, which was an improvement over the previous method of shutting down the immune system to get the inflammation under control. An August 2011 report from the National Institutes of Health showed that such treatments (either through implants, or inserting capsules into the eye) tend to reduce symptoms entirely, and in some cases resulted in a vision improvement of up to one line on the eye charts (I have no idea what one line on the eye charts means, or if this is significant at the stage of my vision loss). They found that capsules are much safer, but implants work more quickly; otherwise, they’re equally effective.
What remains unclear to me is whether Retisert is only effective against specific types of uveitis, whether or not it would have a non-negligible probability of improving my vision to a useful degree, whether it’s actually available to me, etc. (If I had $5000 to throw around, I’d consider hiring MetaMed to look into it. I have less than $1000 to my name at the moment, so that won’t be happening any time soon.)
(Annoyingly, the first Google result I got for uveitis treatment was a discussion thread with multiple people recommending homeopathy. This makes me wonder if just pouring water into one’s eyes can temporarily reduce inflammation about as well as most prescription drops, but there’s probably a study suggesting otherwise and this was just a couple people with a decent placebo boost.)
I don’t suppose any users here have experience with trans-cranial direct current stimulation. More specifically, the Focus V1?
http://www.foc.us/
tDCS has come up before many times. Did you search the site and read previous comments?
Yes.
Yes. Results were generally negative.
However, I was unable to find any results regarding this new rig.
I’m thinking of making a Discussion post about this, but I’m not sure if it has already been mentioned.
We’re not atheists—we’re rationalists.
I think it’s worth distinguishing ourselves from the “atheist” label. On the internet, and in society (what I’ve seen of it, which is limited), the label includes a certain kind of “militant atheist” who loves to pick fights with the religious and crusade against religion whenever possible. The arguments are, obviously, the same ones being used over and over again, and even people who would identify as atheists don’t want to associate themselves with this vocal minority that systematically makes everyone uncomfortable.
I think most LessWrongers aren’t like that, and don’t want to attach a label to themselves that will sneak in those connotations. Personally, I identify as a rationalist, not an atheist. The two things that distinguish me from them:
Social consequentialism. I know conversations about religion are often not productive, so I’m quick to tap out of such discussions. Unlike a lot of atheists, I could, in principle, be persuaded to believe otherwise (given sufficient evidence). If judgement day comes and I see the world burning around me, I will probably first think that I’ve gone insane; but the probability I assign to theism will increase, as per Bayes’ Theorem.
Note that this feeling is dependent on who you know, so I might be a minority in the label I see attached to atheism.
What do people think? I wrote this pretty quickly, and could take the time to write a more coherent text to post in Discussion.
Beware of identifying in general. “We” are all quite different. Few if any of “us” can be considered reasonably rational by the standards of this site.
With a sizable minority of theists here, why is this even an issue, except maybe for some heavily religious newcomers?
That’s a good point, which I’ll watch out for in the future.
One thing I didn’t specify is that this applies to discussions with non-LessWrongers about religion (or about LessWrong). On the site, there’s no point in bothering with this identification process, because we’re more likely to notice that we’re generalizing and ask for an elaboration.
Living in rural America, where Atheism is still technically illegal in some places even though no one would dare enforce it, I think distinguishing the labels “rational thinkers” from “atheists” is a very good idea. I don’t think someone who considers themselves rational and theist would be particularly proud to associate with the label that best fits their particular brand of theism (Roman Catholicism and Mormonism seem to spawn subverters of this expectation, but reducing to the common category of “christian” seems to invoke way more cultural baggage).
Or rather, I wouldn’t dare call myself a christian or an atheist anywhere anyone could possibly find out about. Smart people would dismiss me as inferior for the former, 90% of people within a 200mi radius would start hurling crosses at me for the latter. Probably will need allies in both groups, so I’m kinda concerned about this whole labels thing.
I would be interested to hear which places in rural America, and specifically what law makes atheism unlawful.
It’s not illegal in terms of private life, but certain jobs, particularly public office are forbidden outright to atheists on the state level. I’m more worried about the social consequences, though, since it obviously wouldn’t hold up in court.
I don’t remember exactly where I read about these laws, so it’s entirely possible I’m completely mistaken. It was in the past year, though, which makes me a little more confident that I had reason to trust the source.
Please take a moment to think of how you would choose a term to google to check whether these laws actually exist.
Yes, it is true that 7 state constitutions ban atheists from holding office. This requirement has been struck down by the Supreme Court. But that doesn’t stop people from agitating to enforce them.
Woah, woah. Back up. Seriously? This is a thing? How on earth has this survived despite, y’know, being illegal?
Remember, “Y is technically X” means “Y is not X, but I’m being disagreeable.”
In the U.S., when laws are struck down as unconstitutional, they are not automatically repealed or removed from the statute books. They are just ignored and not enforced.
For instance, after the Supreme Court case Lawrence v. Texas, all state sodomy laws were ruled unconstitutional. However, the states don’t have to formally repeal them (which would require effort from their legislatures) — rather, those laws are simply considered null and void, unenforceable. Some states went and repealed them anyway, but Oklahoma, Kansas, and Texas have not.
So yes, there may be some states or towns that have laws on the books discriminating against atheists, or imposing punishments for blasphemy, or even requiring everyone to go to church. But because these laws are null and void under all sorts of court rulings, it is incorrect to say that atheism is illegal; just as it is incorrect to say that sodomy is illegal.
(There are certainly plenty of people — including many government officials, government school teachers, etc. — who discriminate against atheists, of course. And against Jehovah’s Witnesses, Wiccans, and other religious minorities.)
The particular law I’m thinking of hasn’t come up in court, but a rather similar one in another state has and was overturned by the supreme court.
Now that I’ve tried explaining it, I’m worried my wording was dark artsy enough to warrant a retraction.
I’m pretty sure it’s survived mainly because no one ever bothered challenging it. To do so would require revealing oneself as sympathetic to atheists, if not an atheist oneself, which would be a death sentence to one’s career if they lived in-state afterward (maybe they could work out of one of the more liberal colleges; the one I’m thinking of is religiously affiliated, but is way more tolerant than the general population, to the best that I can discern).
What specific law are you thinking of?
Me neither, but unfortunately I’ve seen at least one form (on an online dating site, IIRC) that doesn’t allow you to leave the “Religion” field blank.
I’m comfortable calling myself an atheist (though I rarely need to), but only because I believe in zero gods and that qualifies me for the label in the eyes of almost everyone. In other words, I treat “atheist” as a feature of my worldview, not as an identity. Sure, these evangelical atheists people are so concerned over might share that feature with me, but we also share opposable thumbs and a well-developed prefrontal cortex, and I’m not too worried about that.
This seems like a common enough take on the word that I don’t risk misunderstandings unless I’m dealing with people from highly religious subcultures who’ve never met an atheist in the wild. Inadvertent identity pollution might be an issue if the set of atheists was more narrowly defined, but there’s no agenda attached to atheism and precious little in the way of unifying features besides the obvious.
On the other hand, I’m distinctly uncomfortable calling myself a rationalist. Partly because the term has a philosophical meaning which is quite unlike that common here, but mostly because it implies adopting a subcultural identity, and that’s playing with fire: ingroup biases are so pervasive, and so easy to accidentally fall into, that you should generally only do that when you have a positive reason to. Even if that weren’t the case, LW is such a small group, and in many ways such a strong outlier, that dressing up in an LW-specific identity is going to carry far more baggage in the outside world than a term as broad as “atheist” would.
I prefer calling myself an agnostic in public. The social connotations for this seem to be equivalent to a non-militant atheism, and functionally atheism and agnosticism seem to be no different. It also fits in with a more Bayesian view of knowledge, where something can be improbable from an epistemic standpoint but you still allow for the possibility that it can be ontologically true. Agnosticism also implies that you’re open to updating your beliefs if you acquire new knowledge, whereas atheism isn’t seen that way by the general public.
At least on LW and at the meetups I’ve been to, I haven’t seen people claiming atheist identity. I agree with your prescription, but think most people obey it already.
Where is the probability at the moment? How high would it rise after experiencing one hour of judgment day, the world burning around you?
Technically, only the fact that the world started burning is significant. That a big fire continues to burn for an hour is not so surprising, unless the fire is violating the known laws of physics in some other way.
If your explanation for the fire is that you went insane and are seeing hallucinations, then the amount of time you keep perceiving those hallucinations might matter.
The continuation of the burning makes the hallucination hypothesis less probable, for as long as it continues. Also, if it continues past the laws of physics, as you point out.
I don’t have enough experience to even give an order of magnitude, but maybe I can give an order of magnitude of the order of magnitude:
Right now, the probability of Christianity specifically might be somewhere around 0.0000001% (that’s probably too high even). One hour post judgement-day, it might rise to somewhere around 0.001% (several orders of magnitude).
Now let’s say the world continues to burn, I see angels in the sky, get to talk to some of them, see dead relatives (who have information that allows me to verify that they’re not my own hallucinations), and so on... the accumulated evidence could bring the hypothesis to one of the top spots in the ranking of plausible explanations.
...assuming that I’m still free to experiment with reality and not chained and burning. Also assuming that I actually take the time to do this as opposed to run and hide.
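The arithmetic being gestured at here is ordinary Bayesian updating, easiest in odds form. A toy sketch, with entirely made-up likelihood ratios (the function is real math; every specific number is an illustrative guess, not a claim about the actual evidence):

```python
# Toy Bayesian update in odds form. All numeric values are illustrative guesses.

def update(prior_prob, likelihood_ratio):
    """Convert probability to odds, multiply by the likelihood ratio, convert back."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 1e-9             # ~0.0000001% prior, as in the comment above
p = update(p, 1e4)   # one hour of judgment day: a large (hypothetical) update
print(p)             # ~1e-5, i.e. ~0.001%
p = update(p, 1e4)   # angels, verifiable dead relatives, etc.
print(p)             # ~0.09 -- plausibly a top-ranked hypothesis now
```

The point the toy makes concrete: a hypothesis can start many orders of magnitude down and still climb into contention after a couple of strong pieces of evidence, because each update multiplies the odds.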
What do people mean by this sort of probability estimate, this one from Angelina Jolie’s NYTimes article? “My doctors estimated that I had an 87 percent risk of breast cancer and a 50 percent risk of ovarian cancer, although the risk is different in the case of each woman” (Italics added.)
Do they mean:
“Don’t mistake a high probability for certainty. In particular, don’t accuse me of misleading you if I state a high probability and the outcome does not occur.”
“Don’t think that because the probability of a bad outcome is less than 100%, you can do some wishful thinking and ignore the risk.”
“With a little effort, we could acquire more evidence allowing us to refine the probabilities for your case.”
Or something else?
Often, if you ask someone for the probability (or frequency) of some outcome based on their experience with a given reference class, they refuse to give a number, typically saying that “each case is different.” In these cases, a fourth reason is possible, namely that they are too lazy to do the estimation.
I understand that not everyone is a Bayesian black-belt, but I am trying to figure out what implicit assumption motivates people to talk this way.
I assumed it was just a way of saying different women fall into different reference classes for the purposes of estimating breast cancer risk (e.g. an alcoholic woman with a positive BRCA1 test result and a vitamin D deficiency vs. a teetotaller with no harmful BRCA mutations and no vitamin deficiencies).
Thanks, I think that’s it. She means “medical science has given us more detailed results than just a blanket probability across all female humans. Using various sorts of information, doctors can give each woman a more refined probability estimate.”
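Put differently, “each woman is different” amounts to conditioning on a narrower reference class. A hypothetical sketch of the idea, where the 87% figure comes from the article but every other number is invented for illustration and is not real medical data:

```python
# Hypothetical lifetime-risk figures by reference class.
# Only 0.87 comes from the article; the rest are invented for illustration.
risk_by_class = {
    "general population": 0.12,
    "BRCA1 positive": 0.65,
    "BRCA1 positive + family history": 0.87,
}

def estimated_risk(features):
    """Return the figure for the narrowest reference class the features match."""
    if "BRCA1" in features and "family history" in features:
        return risk_by_class["BRCA1 positive + family history"]
    if "BRCA1" in features:
        return risk_by_class["BRCA1 positive"]
    return risk_by_class["general population"]

print(estimated_risk({"BRCA1", "family history"}))  # 0.87
```

So “the risk is different in the case of each woman” just means: with more tests, you can move from the broad reference class to a narrower one whose conditional frequency applies to you.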
There is no implicit assumption. She was apparently tested for BRCA1 based on family history, and was found positive. The correlation between BRCA1 and those cancers yields a certain percentage of risk, a calculation into which family history might also factor. She links to here: http://cancer.stanford.edu/information/geneticsAndCancer/types/herbocs.html
Your third option is correct—although both effort, will and resources to acquire genetic testing are required.
Yes, you got it. She’s saying “If you go to your doctor and do some tests, you can get an estimate targeted at you.”
Who’s “they”? You are reading a text written by Angelina Jolie for a general audience. She has to make certain that no woman who reads the story comes away thinking that she also has an 87 percent risk.
What does a Bayesian black-belt do when the only numbers he has are some frequentist statistics that someone else computed?
Thanks, satt and zaine answered it.
Introspecting a bit, I realize that my question was motivated not by Angelina so much as by various refusals I have encountered to give a probability/frequency estimate, even when people are well-positioned to give one.
I think it is often motivated by a tendency to withhold information in order to maintain power in human interaction; but in many cases it’s the first and second options above.
I have a few pieces of knowledge that I think could be somehow synthesized to form a really powerful idea with a lot of implications.
In Yvain’s excellent “A Thrive/Survive Theory of the Political Spectrum” (read it if you haven’t already!) he makes a really compelling argument that “rightism is what happens when you’re optimizing for surviving an unsafe environment, leftism is what happens when you’re optimized for thriving in a safe environment.”
It seems to me that a similar analogy can be made with happiness, where happiness is thrive mode and depression is survive mode. I’m not sure how widely accepted this theory is, but it seems kind of intuitive and obvious to me—happiness is what happens when things are going well, depression is what happens when things aren’t, and the two moods must serve some sort of evolutionary function, right?
There is a direct relationship between happiness and political ideology, where the more right-wing you are, the happier you are, i.e. far right people are the happiest, far left people are the least happy. (I got this information from the General Social Survey by correlating the happiness and political ideology variables, but it was a few months ago when I did this, and the website is really confusing and I can’t figure out how to display it the way I had it before, let alone link to it. So maybe you’ll have to take my word for it.)
So it seems like the strategy you’re using for yourself is the opposite of the strategy you’re using for society, or something? Yvain theorizes in the post how someone who formulates a heuristic for themselves early on in their life that says “the world is basically dangerous” will become a right-winger, and someone with the opposite heuristic will become a left-winger, and this explains why people divide so easily into political categories. But it seems like the reverse is in fact true? This makes sense with religiosity (believing in a loving God who will keep you safe) being correlated with right-wing beliefs and poverty (growing up in a dangerous environment where survival is uncertain) being correlated with left-wing beliefs.
I don’t really know what to make of this, but it seems like it could be really important.
Depression causes inactivity, helplessness, not dealing with problems, cutting social ties, and sometimes suicidal behavior. It’s hard to see how any of that could be evolutionarily beneficial in any scenario. (Which is not to say it’s not an evolved behavior due to some different mechanism, like being tied to otherwise beneficial genes.)
I’ve often thought the symptoms of depression seem similar to those of hibernation. Depressed people usually sleep more, decrease activity levels, cease romantic/sexual efforts, withdraw/hide from others, etc. That behavior pattern seems like it might be adaptive during temporary hardships (drought/famine/winter, etc.) where you really are helpless to change circumstances, and nothing you can plausibly do would help.
If depression was hibernation-like you’d expect oversleeping and undereating. Melancholic depression causes insomnia and undereating, atypical depression causes oversleeping and overeating.
Yvain makes some good points, but I don’t buy his argument, in part because I think that a single political dimension (aka one left-right axis) is inadequate for any in-depth analysis.
I don’t agree that happiness is necessarily associated with the thrive mode and depression—with the survival mode. People who are engaged in a struggle to survive are rarely depressed, they don’t have time for this. On the same “intuitive” basis I can argue that sloth/ennui is the consequence of the thrive mode and motivation/energy is the consequence of the survival mode.
I don’t believe that “direct relationship between happiness and political ideology”. I’ll need to look at the data and at how the analysis controls for other variables before I might be convinced it’s real. However, I do have a strong prior that this is not so.
We already know that a political dichotomy is game-theoretically mandated by the U.S. voting system. Which means we don’t particularly need a separate and additional explanation for political dichotomies, and so I’d want independent evidence of other explanations.
Maybe depression is like the appendix, or a congenital skin rash, or food allergies, or male nipples, or the optic nerve being in backwards, or ingrown toenails, or the coccyx. In other words, sometimes evolution is just stupid.
The implication being that the coccyx is useless? How so?
Indeed, Wikipedia says the coccyx is far from useless in humans:
I don’t think the US political system explains that much, countries with different systems also have political dichotomies, and how much they correspond to the US ones has more to do with culture and society than with political institutions.
It seems to me that some people really are stuck in a “survival mode” and some people are stuck in an “exploratory mode”, and that this influences many of their life choices. The former will approach possible changes by saying: “This is far too dangerous. Life is not perfect as it is now, but we can manage.” The latter will approach possible changes by saying: “Change is exciting. What could possibly go wrong? You’re a chicken!”
This is probably a consequence of some evolutionary algorithm: “When in danger, play it safe, when in abundance, explore new possibilities.” Except that one does not update immediately on their current situation, it takes more time, and perhaps the situation at some specific age has a long-lasting impact. For humans, their setting influences how they perceive the world, so even if the original situation is gone, they can filter their inputs to make themselves believe the situation remains.
Another, independent part is how these modes map to political opinions. Seems to me this is not completely straightforward, and may be just a consequence of a specific political system, or a specific history of political alliances in the past. -- E.g. it happened a hundred years ago, for random reasons, that many Blues adopted a “survival mode” agenda and many Greens adopted an “exploratory mode” agenda. This created a positive feedback loop, because young people preferring the “survival mode” were more attracted to the Blue politics, and young people preferring the “exploratory mode” were more attracted to the Green politics, and in turn they made the agenda of the parties more extremely “pro-survival” or “pro-exploration”.
The specific mapping of “right: survival; left: exploration” may be correct for USA, but it cannot be generalized for the whole world. Which suggests that the mapping is either arbitrary, or shaped by some US-specific factors.
For example, in Slovakia the communists recently won the election using the slogan: “People deserve certainties.” (google translate). On the other hand, decriminalization of marijuana and homosexual marriage are considered right-wing topics in Slovakia (more precisely, they are topics of one specific right-wing party; all other parties avoid these topics completely). So to me it seems that the political left is fully “survival mode” here (Communists, Slovak Nationalists), and the political right is divided into “survival mode” (Catholics, Hungarian Nationalists) and “exploratory mode” (Libertarians). I am using the words “left” and “right” here to reflect how those parties identify themselves or who are their typical coalition partners.
Why is it so? Well, we had a history of communist rule here, that’s the difference. Seems to me that the “survival mode” people are scared of the present, and attracted to some part of history which they idealize. So if a country did not have a communist history, communism will be attractive only to “exploratory mode” people; but if a country had a communist history, “survival mode” people will be attracted to it, simply because it is history. They remember the mandatory May Day parades with red flags, the strong rule of The Party, mandatory employment, newspapers without scandals… and they want it all back. (In a different country, the same type of people would want mandatory prayers, strong rule of The Church, church-approved materials without scandals, etc.) On the other hand, the whole “free market” concept is an exciting new thing here, which attracts the “exploratory mode” people. Free markets, freedom for homosexuals, freedom to use marijuana… all these things appeal to the “exploratory mode” people, so they get grouped politically together.
Which leads me to the conclusion that there is a difference between “survival mode” people and “exploratory mode” people, and this difference is reflected in the structure of political parties. However, the connection between a specific mode and a specific party seems to be mostly a historical coincidence (either “we already tried this, but never tried that” or “a hundred years ago, our enemies joined the Blues, so we had to join the Greens”).
I don’t know how well it generalizes outside the US, but here, I would expect the correlation between happiness and ideology to reflect the fact that the political right effectively employs the just world fallacy in far more explicit ways than the political left. More advocacy of wealth redistribution is correlated with thinking that the status quo distribution is unfair. Plus, in the US specifically, both old-fashioned Calvinist determinism and modern prosperity gospelism are associated more with the right than the left.
People who think either or both of “the status quo is better than the alternative” or “it’ll all even out in the afterlife” are likely to be happier than people who think the world as it is is unjust.
I’ve noticed a depressive tendency on the left, but I think it might be an accidental bad strategy.
Please take this as my observations which may be shaped by what I choose to read.
It seems to me that left wingers have a habit of having every bad thing in the world remind them of every other bad thing. This may be a lack of respect for specialization, or it may be a side effect of competition for mind space, where everyone is trying to recruit everyone else for their preferred cause.
I also think that efforts to puncture grandiosity have overshot to the point of causing despair. That one’s race or country or species aren’t reliably wonderful (no doubt true) isn’t the same thing as them being the worst thing ever.
Another aspect is that right-wingers get angry. Left-wingers get angry too, but their goals are so large and absolute that they also get sad.
Also, I think left-wingers didn’t used to have the inclination towards depression, or at least the earlier pro-labor, pro-civil rights bunch don’t strike me as being especially unhappy.
I think depression is the wrong word there. Too clinical, too enduring.
I think you would be on stronger footing to use optimism and pessimism as strategies for thrive/survive; happiness and sadness are the feedback mechanisms letting you know if you are using the right strategy:
optimistic and wrong = failed expectations -> unhappy
optimistic and right = opportunity taken and paid off -> happy
pessimistic and wrong = missed chances -> unhappy
pessimistic and right = danger avoided -> happy
Regarding: “rightism is what happens when you’re optimizing for surviving an unsafe environment, leftism is what happens when you’re optimized for thriving in a safe environment.”
My suggestion would be that political beliefs in general are for optimizing survival and fairness. Both ends of the spectrum want the world to be safe. Both ends believe in fairness. But the threats are coming from different places.
Yvain makes a major assumption in his post that in apocalyptic scenarios people turn on each other. But this is something I would say we find more in films than in real life, except in cases of insider-outsider groups, like pogroms. At the same time, it is a defining factor in a person’s political views. If we were to argue about this, we’d be arguing about politics, and I think we consider that off-limits here? Anyway, in the abstract:
It seems to me that the defining characteristic of left and right is how much authority, power, structure there needs to exist for there to also be order. “People don’t rape, kill, and steal because the government/god/[structure] stops them.” vs. “People don’t rape, kill, and steal because they don’t want to, for the most part.” So the methods of optimizing for a safe environment with these opposing views point in opposite directions. Are the structures and hierarchies holding our society together, or are they its biggest threat? Build them up, or tear them down? Obviously, there are more moderate positions.
Basically, these are not safe vs. unsafe, but about the perception of where that source of danger will be found.
Regarding your last two points: “happiness is what happens when things are going well, depression is what happens when things aren’t” and “the world is basically dangerous” etc.
Thinking makes it so. A person on the right perceives their enemies as far away, and weaker because their idea of an enemy is likely a foreigner in another country without a large military. They never have to personally interact with those they consider enemies (unless left leaning citizens qualify). A person on the left perceives their enemies as nearby, and stronger: the police, the courts, possibly many societal institutions and corporations, capitalism, sexism, their boss, etc. which are at least seen if not experienced in some aspect or another.
What I would suggest is, the closer and stronger your political opposition is perceived to be, the less likely you will be happy, and vice versa.
Exactly. Sometimes you even find both sides using the same applause lights, but with different political connotations.
For example, “freedom”. Everyone agrees that freedom is a good thing. Only for some people, “freedom” means freedom from an oppressive government, which can be achieved by a free market. And for other people, “freedom” means freedom from a demanding employer, which can be achieved by government regulation and taxation. Both sides will argue that their definition of “freedom” is the right one.
Below that, there is perhaps the same emotion. We all would prefer to be not commanded, not pushed around, free to choose how we spend our days. It’s just how we evaluate the risks, based on our experiences and the model of the world.
It seems that the Right/happy Left/angry-and-depressed distinction may have been around for a long time. The English Roundheads were infamously grim and dour, the Cavaliers laughing and cheerful.
As for Yvain’s post, he is buried so deep in Whig history (believing that the whole arc of history is leading to his own leftist beliefs taking over the world) that his whole thought process is absurdly provincial and hopelessly flawed. The left’s ascendance in western countries over a period of a couple centuries in no way implies that it is destined to control the future.
To be fair, western civilization is the best candidate for a singularity of some kind, and if anyone’s values got preserved (by an FAI or just high-tech conquering all opposition) it would probably be whoever fooms first (unless we all die instead.)
I think that Yvain is reversing good times and bad times. In the EEA people generally did not get their food from things that they made or planted. They got their food from things that they hunted or searched for. Which means that they didn’t have anything to defend in bad times. So this is the way I see it:
Survival Mode: What you enter when you see a reasonably clear way to survival. You continue to do things the way you know works. You try to preserve the status quo, because the status quo is to your advantage. You accumulate wealth if you can because 1) there’s a reasonable chance that you’ll get to use it and 2) it might help you stay in your good situation longer. You are ready to fight other tribes to keep your place in the sun (or more realistically your place by the water), but you don’t go looking for trouble. etc.
Exploration Mode/Normality: What you enter when times get tough. You move on (call it “explore” if you think that sounds better) because the place you are can’t sustain you or is dangerous. You look for new sources of food and other things you need because the old ones are not enough. You don’t care about many material things because anything you keep will encumber you, thus reducing your chances of surviving even further. etc
People who are depressed aren’t very productive. Depression is no useful survival mode.
I don’t think that people are so easily divisible into political categories.
I can imagine plenty of situations where passively being sad and not going out and attempting to be productive are safer. Maybe depression is an alternative to cabin fever? Long hard winters are easier to survive if you’re too depressed to go out and possibly freeze to death and instead stay in your cave/yurt eating the most easily accessible saved food. That would explain the evolutionary value of Seasonal Affective Disorder.
There are more people with SAD who get depressed in the spring than in early winter or the darkest part of winter.
Another thought from hypothesis land: maybe a little depression is a chance to retire and regroup, but in the original environment, there was more that would pull people out of depression—social contact and work that obviously needs to be done.
This is about mild-to-moderate depression, though. Major depression seems to be different, and not good for anything.
I wonder what the highest rate of occurrence something can have and still be an evolutionarily detrimental spandrel, rather than an evolved response we just don’t understand.
If the harm can be something that only occurs under circumstances that didn’t obtain in the EEA, 100%.
Is anyone here in a position where you review research papers for a scientific journal? Has it ever happened that you got the impression the authors were lying? How did you handle that?
Has anyone used Beansight? It seems like it could be a replacement for things like predictionbook with a few improvements.
I don’t see how only being able to vote up or down is an improvement over predictionbook’s ability to pick a number between 0 and 100.
A lot of the predictions are pretty vague such as “On 01 September 2013, Homework will become virtual”. How the heck is this question to be scored?
I’m considering trying a diet that involves fasting for 2 days a week. What do people think about intermittent fasting diets?
ETA: I don’t have a great deal of weight to lose, and if I’m honest I want to lose it for aesthetic rather than health reasons.
I think some people tolerate them better than constant calorie limitation.
Chris Kresser says that some of his patients who do intermittent fasting have blood sugar regulation problems. More.
I’d read somewhere that IF is a bad idea if you’re already under stress. IF leads to increased cortisol and can result in adrenal fatigue. I thought I saw it at Chris Kresser, but I can’t find it there. However, there are other articles which are concerned about the cortisol angle.
I’d say it isn’t an awful idea—IF does work for some people, but maybe you should be getting blood tests and have some way of remembering to drop the IF if your life gets stressful.
I did that for a few months a couple of years back; I fasted every weekend. Big thing I learned from it is that I become extremely grumpy/unpleasant when I don’t eat for two days in a row. (I wasn’t doing it to lose weight, it was an attempt to gain better control of my automatic processes. I don’t think it actually worked to any significant extent.)
First couple of times I permitted myself an unlimited quantity of clove tea. After the first couple of times clove tea began to taste like dish soap and I stopped drinking it.
Does anyone have a solution to Post Narcissism?
I do this, and still wind up only regretting what I wrote after someone’s pointed out a huge flaw with it. (Oddly, people tell me not to stress about saying things just right. Since I still screw up on a semi-frequent basis, worrying about it apparently isn’t that helpful.)
I do this. I don’t regard it as an issue, however; I’m looking for faults, but as a result my writing has steadily improved over the past few years; it’s part of the reason I no longer write using Old English grammar.
What exactly is the problem here?
Time loss due to excessive interest in reading something just because it is one’s own creation, to the point where one doesn’t think it is good anymore.
Stop writing?
That would only be a good solution if I was writing for my own delight, and angry about the delight when it actually happened. Most people write so that other people read what they wrote (I think). This is how I write academic or magazine articles, what I did with my book, and what happens to video content I post anywhere. But doesn’t happen with posts and comments, those somehow require me to go back and read them, sometimes up to 10 times. Cyan and Komponisto seemed to have the same problem.
A point of metaphysics:
It is impossible for anyone to force anyone to do anything: you can adjust their incentives, but it’s always open to someone to just refuse. And this is metaphysical because it’s true no matter how the world is constituted: imperious curses wouldn’t make this possible either.
But points of metaphysics are mostly (if not always) misunderstandings of some kind. What am I misunderstanding?
On Monday, it’s impossible for anyone to force anyone to do anything. On Tuesday, it’s not. What’s the difference between Monday and Tuesday? What do you expect to see on Monday that you don’t expect to see on Tuesday or vice versa?
Is this a riddle...Hmm, why would Tuesday be different?
It’s not a riddle, it’s a heuristic for encouraging specificity.
Oh. I don’t think it will work with this. My whole problem is that I’ve arrived at a conclusion that can’t be false under any circumstances, and this seems to me like a likely misunderstanding.
Alternatively, your conclusion is trivial—that is, it doesn’t actually say much of anything. This seems to be the case here; you’re evaluating the denotative meaning of the sentence (what it literally means). It -seems- non-trivial because then you’re sneaking in the connotative meaning of the sentence—what it implies.
Could you explain this in specifics? What denotation and what connotation?
I assume you argue the imperious curse doesn’t force somebody to do something because they’re not really doing it (likewise with grabbing their hand and hitting them with it, provided you’re sufficiently stronger to do so). Likewise, any kind of mechanism of forcing somebody to decide to do something still leaves them open to refuse. This is the denotative meaning.
The connotative meaning is pretty subjective, but could be that we’re subverting somebody else’s will. If you kidnap somebody’s child and ransom them, sure they still, strictly speaking, have a choice in the matter, but in any realistic sense they don’t.
Hm, that sounds like a good answer to me.
Can you force your computer to do anything? Can the computer refuse to do what you want? Of course the computer can crash. Does that count as refusing to obey your commands?
If you don’t think the computer has the free will to refuse your commands, why do you think you do? Because your brain runs on neurons and the computer runs on silicon?
There are many ways to influence other people that don’t involve adjusting incentives. Just look into the psychology literature.
The ability to refuse requires knowing that someone is trying to influence you. An example: Andrew Berwick put a lot of effort into getting people to read his book. He studied the way ideas spread on the internet. Conspiracy theorists do a lot to spread certain ideas. Andrew knew that conspiracy theorists like to talk about Freemasons.
Andrew then went to four Freemason meetings and put Freemason images on his Facebook account. As a result, all of the conspiracy theory people had their Freemason story when Andrew committed his terrorist act.
None of the conspiracy folks got the idea that those images were specifically crafted to play them, because the conspiracy folks don’t think that someone would treat them that way.
They couldn’t refuse in a meaningful sense because they were ignorant.
That’s a good point; trickery does seem like a kind of force.
A funny comedy sketch about techno-immortality and being the “last generation to die.”
After reading some of the comments in the discussion on souls, I got to thinking about near death experiences, in the context of dream thought patterns (based entirely on my observations about how I think in dreams). This led to me imagining what an NDE might be like, which somehow ended in hypothetical dying me managing to overcome the absolute horror of realizing what was going on long enough to think “Maybe there’s a way I can think that will help keep the information in my brain intact a little longer...”. (Obviously, there’d be some serious cognitive impairment to work against.)
I almost posted this to the munchkin ideas thread, but decided that it’s way too hypothetical and has very little science to build on. It strikes me as an interesting area to look into, though I’d not expect it to be much use at increasing one’s chances of revival. I’m wondering if there is any potentially useful information on the subject, though. For example, are there any accounts of NDEs from skilled lucid dreamers? Have there been fMRI studies on terminally ill patients as they died? I’d rather not get into a situation where my only choices are what to dream about and how long before ceasing to exist, but making the most out of it seems like it’d be a crowning moment of munchkinry, if one that no one would ever know about.
I’m teaching some classes for a test prep company in a town 2 hours away. They’re paying me fairly for my expenses and travel time, but it still feels like kind of a waste—it’s like 20 hours a week! Of course most productive things cannot safely be done while driving, but listening is a notable exception.
Can anyone recommend some good educational podcasts, or other free downloadable audio that will make me better in some way? I’m working on learning Spanish, so that seems like a good place to start.
Acquire material from “The Great Courses”.
Why aren’t teachers as respected as other professionals? It’s too bad that the field is lower paid and less respected than other professional fields, because the quality of the teachers (probably) suffers in consequence. There’s a vicious cycle: teachers aren’t highly respected --> parents and others don’t respect their experience -->no one wants to go into teaching and teachers aren’t motivated to excel --> teachers aren’t highly respected.
It’s almost surprising that I had so many excellent teachers through the years. The personal connection between teachers and their students must be particularly strong, because the environment doesn’t seem to be very motivating for teachers to want to be excellent at what they do.
Based on anecdotal evidence. I just think it’s too bad.
Really? The BBC thinks they’re the second highest status profession, just after professor (and before CEO).
They’re significantly better paid than you would expect given the qualifications required to be a teacher (none).
I’ve gotten the impression that the respect of teachers in the US is way below what it is in the UK (or Finland, for that matter).
To all three of you: respected by whom?
People in general?
I think that’s a little vague. My impression is the following (in the US):
Many people will claim to respect teachers in the abstract. Charitably I think this is based on fond memories of their favorite teacher, less charitably based on a sense that respecting teachers is something people feel like they ought to be seen doing.
However, actually being a teacher (e.g. on a date) is not likely to garner a great deal of status relative to other professions.
Also, students generally don’t seem to respect their teachers (but this is also vague).
Agree with the OP that the basic problem seems to be a vicious cycle (again, in the US).
It’s probably something like Linus’s “I love humanity … It’s people I can’t stand”.
I wonder if there isn’t the opposite effect for some group, like CEOs, where people may have somewhat negative feelings about the abstract concept, but show a great deal of respect in person.
It’s even worse here in Italy—ISTM that most people would agree that most teachers these days are incompetent.
Another question would be whether people who interact with teachers qua teachers — for instance, parents of students; coaches, principals, or other school employees — treat them as moral and social equals, or as inferiors. It seems to be a common complaint from schoolteachers that some parents, for instance, consistently treat their children’s teachers as inferiors.
I just realized I generalized too much. In Canada, you need a four-year Bachelor of Education degree specifically (the same as for being an engineer, and more than most trades). The average salary seems to be about the same as in the US.
Here, have a summary. Until fairly recently, teaching was something you did until you got a real job, and that perception lingers. Add to that some people’s resentment of teachers-as-authority or teachers-as-experts. Add to that the suspicious fact that male teachers are better respected than female teachers, but the profession is mostly women and is seen by a scary number of people as “women’s work.”
As a quick answer, I would say people appreciate what they pay for, and do not care about what they can have for free. Professors are respected, and even teachers at private schools, as are professional tutors. But when people say “teachers”, they usually mean public school teachers, who are essentially free (taxes notwithstanding). To spread science, keep it secret extends to education and educators as well. If educators were rare, expensive keepers of knowledge, then they would be coveted.
And of course, since the government is the largest employer of teachers, they are able to keep their salaries low, leading to a decrease of prestige and quality of teachers. Which leads to a vicious cycle downwards.
The relation is there, but I think the causality runs the other way round. You are more willing to pay people for their services if you respect them.
Evidence: 1) People also pay for public schools, through their taxes. But even if the law required them to pay a fixed amount of money to the public school directly, the situation would remain the same. Paying is not essential here; paying voluntarily is. 2) Sometimes people pay for a product or service simply because they need it and can’t obtain it otherwise. For example, people pay prostitutes, but typically don’t respect them. 3) If you respect your friend as an expert, and your friend explains something to you for free, you don’t stop respecting them. -- This is why I think respect comes before payment.
In my opinion (which is supported by my first-hand experience as a teacher), the biggest impact on teachers’ status came from removing almost everything that our ape brains perceive as status-related. Teachers are not allowed to punish students physically. (I am not saying this is a bad thing. I am just saying it removes a part of what would be obviously status-related in the ancient environment, and our brains notice that.) Students are allowed to disobey and even offend teachers (within some limits they carefully explore and share with each other) with virtually no consequences. This is responsible for maybe 80% of the change. -- Note: Some people even consider the lowering of teachers’ status a good thing! They usually describe a (strawman?) tyrant, and explain a need to make people more equal. In reality, in many schools the teachers are already at the bottom of the status ladder, and the most aggressive students are at the top, bullying their classmates and teachers.
The remaining 20% is related to the fact that lower degrees of education are no longer a status symbol (because almost everyone has them), and even the higher degrees have become weaker evidence (as countries participate in a pissing contest about who has a larger % of population with a university degree, by lowering the standards); many employers care about having a diploma but don’t care about specific knowledge (the government is especially guilty of this; it may be different in your country), which makes teachers and schools replaceable commodities, so most customers only care about the price. Having a better education (assuming the same diploma) seems to have absolutely no consequences; people underestimate the inferential distances and say everything is online anyway. And then we have the vicious cycle of lower respect driving some good teachers away, which reinforces lower respect for those who stayed, who are suspected of not having had a better choice.
Warning: awkward self-disclosure, grief, and death
I normally hate this kind of post, but honestly the LessWrong community is the only group of people I trust to give me useful advice. A relative of mine is dying; what is the best way for me to deal with this?
Say everything that you think you might otherwise eventually regret not saying. Including goodbye.
Has Rossi done it? Will he be Emperor after all?
Cold fusion reactor independently verified, the arxiv pdf is here.
(...)
More articles, pro, contra.
The older article at http://scienceblogs.com/startswithabang/2011/12/05/the-nuclear-physics-of-why-we/ basically demolishes any possibility that it is anything but a hoax. There is literally no way that the reaction can be producing the observed (normal Earth crust) ratio of copper isotopes, even if you ignore the complete lack of radiation.
That, plus refusing to unplug the device while it was in operation or have the power pass directly through a measuring device rather than using magnetic effects to measure it, pushes it into obvious hoax territory.
Tonight I had a strange dream. Bob got a message from Alice and said “Who the f*ck is Alice?!”.