I want to thank this community for existing, all the people founding it, the people contributing to it and all the stuff linked here. I may not like all the topics or agree with all the opinions posted here. Nor may I find use of most of the stuff I read around here. But at least I don’t feel so alone anymore.
How well do medical doctors fare in terms of health outcomes compared to people of similar socioeconomic status and family history? Is there a difference between research doctors and practising doctors? What about nurses? Is there a notable difference there too?
This question is posted within the context of “how big is the effect of medical knowledge on personal health?” and the assumption that medical doctors should represent the upper end of the spectrum. Other medical professionals should represent data points in between. Taken together, this should give some hint of the personal value of medical knowledge, in some kind of unit.
This study seems to go quite a ways towards answering your question:
Among both U.S. white and black men, physicians were, on average, older when they died (73.0 years for white and 68.7 for black) than were lawyers (72.3 and 62.0), all examined professionals (70.9 and 65.3), and all men (70.3 and 63.6). The top ten causes of death for white male physicians were essentially the same as those of the general population, although they were more likely to die from cerebrovascular disease, accidents, and suicide, and less likely to die from chronic obstructive pulmonary disease, pneumonia/influenza, or liver disease than were other professional white men...
These findings should help to erase the myth of the unhealthy doctor. At least for men, mortality outcomes suggest that physicians make healthy personal choices.
-- Frank, Erica, Holly Biola, and Carol A. Burnett. “Mortality rates and causes among US physicians.” American journal of preventive medicine 19.3 (2000): 155-159.
The doctors had a lower mortality rate than the general population for all causes of death except suicide. The mortality rate ratios for other graduates and human service occupations were 0.7-0.8 compared with the general population. However, doctors have a higher mortality than other graduates. The lowest estimates of mortality for doctors were for endocrine, nutritional and metabolic diseases, diseases in the urogenital tract or genitalia, digestive diseases and sudden death, for which the numbers were nearly half of those for the general population. The differences in mortality between doctors and the general population increased during the periods.
-- Aasland, Olaf G., et al. “Mortality among Norwegian doctors 1960-2000.” BMC public health 11.1 (2011): 173.
EDIT: I added a second study and cleaned up the citations.
Somewhat related, I remember reading an article claiming that doctors are more likely to opt out of life-prolonging treatment. Not really well-cited, but it seemed like an interesting claim: that end-of-life hospital care is so bad that they would choose not to undergo it.
Both sides are totally opposed, yet see the same fact as proving they are right.
If redheads are 10 times more likely to be in jail for violent crimes, it is evidence for both “redheads are violent” and “judges hate redheads”—and both might be true!
And “redheads are violent” and “judges hate redheads” are not totally opposed; they only look that way in a context where they are taken as arguments in support of broader ideologies which themselves are totally opposed (or rather, which compete with each other and so oppose each other).
More generally, many facts can be interpreted in different ways, and if one interpretation is more favorable to one ideological side, that side will use that interpretation as an argument. Seen that way, it seems almost inevitable that facts will look like they support “opposite sides”.
Confirmation bias (also called confirmatory bias or myside bias) is the tendency of people to favor information that confirms their beliefs or hypotheses.[Note 1][1] People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and memory have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a greater reliance on information encountered early in a series) and illusory correlation (when people falsely perceive an association between two events or situations).
Attitude polarization, also known as belief polarization, is a phenomenon in which a disagreement becomes more extreme as the different parties consider evidence on the issue. It is one of the effects of confirmation bias: the tendency of people to search for and interpret evidence selectively, to reinforce their current beliefs or attitudes.[1] When people encounter ambiguous evidence, this bias can potentially result in each of them interpreting it as in support of their existing attitudes, widening rather than narrowing the disagreement between them.[2]
Everyone knows utilitarians are more likely to break rules.
(This is mostly a joke based on the misspelling. I know a sophisticated utilitarianism would consider the effects of widespread lawbreaking and would not necessarily break laws so often as to be overrepresented in prison.)
That’s only once you reformulate the grandparent’s scenario as “If the justice system is unbiased, racism is justified.” It surprises me that the parent would cut the grandparent’s class along its joints… can mstevens think up examples of his class not covered by the parent, or nonexamples covered by it?
I have sometimes seen arguments that fit this pattern, including on Less Wrong —
Your disagreement with me on a point of meta-level theory or ideology implies that you intend harm to me personally, or can’t be trusted not to harm me if the whim strikes you to do so.
It seems to me that something is deficient or abusive about many arguments of this form in the general case, but I’m not sure that it’s always wrong. What are some examples of legitimate arguments of this form?
(A point of clarification: The “meta-level theory or ideology” part is important. That should match propositions such as “consequentialism is true and deontology is false” or “natural-rights theory doesn’t usefully explain why we shouldn’t hurt others”. It should not match propositions such as “other people don’t really suffer when I punch them in the head” or “injury to wiggins has no moral significance”.)
One mistake is overestimating the probability that the other person will act on their ideology.
People compartmentalize. For example, in theory, religious people should kill me for being an unbeliever, but in real life I don’t expect this from my neighbors. They will find an excuse not to act according to the logical consequences of their faith; and most likely they will not even realize they did this.
(And it’s probably safest if I stop trying to teach them to decompartmentalize. Ethics first, effectiveness can wait. I don’t really need semi-rational Bible maximizers in my universe.)
I don’t think the problem with such arguments is so much that they are wrong on a factual basis, but that they prevent the discussion of some important ideas.
A feminist can argue that ze can measure with an implicit bias test how biased people are, and that the argument that you are making is going to make the average reader more biased. Ze might be completely right, but that doesn’t mean that your argument is wrong on a factual level.
Once you move to the political consideration that certain things are not allowed to be said because they support harmful memes, you are in danger of getting mind-killed and being left with a world view that doesn’t allow you to make good predictions about reality.
My heart sinks whenever I see these sorts of discussions on LessWrong. Could we analyze something we expect to be verifiable?
Making verifiable predictions about a missing airplane doesn’t seem all that difficult to me; what am I missing? For instance, is there something wrong with this one?
My hosting company got annoyed because something was taking up too many resources. I did what the nice person on the telephone suggested (installed some WordPress plugins, uninstalled others) and it’s back online now. If the problem recurs I might have to restrict commenting for a while until I can figure out a more permanent solution, but for now everything’s fine.
I checked the site, and got a 403. Is that what you’re talking about? When did you first notice it? The latest Google cache is from Mar 18, 2014 17:37:21 GMT.
Peripherally associated with the Gwernosphere
What Universal Human Experiences Are You Missing Without Realizing It?
Posted on March 17, 2014 by Scott Alexander
Remember Galton’s experiments on visual imagination? Some people just don’t have it. And they never figured it out. They assumed no one had it, and when people talked about being able to picture objects in their minds, they were speaking metaphorically.
And the people who did have good visual imaginations didn’t catch them. The people without imaginations mastered this “metaphorical way of talking” so well that they passed for normal. No one figured it out until Galton sat everyone down together and said “Hey, can we be really really clear about exactly how literal we’re being here?” and everyone realized they were describing different experiences.
The concept of heroic responsibility seems to be off-putting for some people, mostly because it looks like it puts the blame for every single bad thing at the feet of an individual. Generally, I’ve answered this objection by telling them that they don’t need to look that broadly and that they can apply the concept at a smaller, everyday scale. So instead of worrying about solving depression forever, you can worry about making sure a friend gets the psychological help they need, and not telling yourself things like: “It’s their parents’/partner’s/doctor’s responsibility that they get proper help.”
Is this a correct way to explain the concept or am I strongly misrepresenting it?
Maybe it’s not a problem with explaining the concept per se; it’s just that its consequences are unpleasant. It feels like you are telling people that heroic responsibility is one of the possible choices, one that they didn’t make, but could have made, and perhaps even should have made. There probably are good reasons why most people don’t take heroic responsibility, but these are difficult to explain. So it’s easier to pretend that the whole concept does not make sense to you.
Also, it’s not my responsibility to understand the concept of heroic responsibility. :D
EDIT: It may be related to the status-regulation emotion that apparently some people feel very strongly, while others don’t even know it exists. The problem with “heroic responsibility” might simply be the emotional reaction of: “Who do you think you are, that you even consider taking more responsibility than other people around you?! That is a task worthy of a king; and you obviously aren’t one. And you try to explain it to me, but I am also not a king; I don’t even pretend to be, so… this whole thing doesn’t make any sense. You must be insane.”
It seems most people don’t feel good about being considered personally responsible for all the bad things in the world. Especially people who already suffer from anxiety of some kind.
But it’s a worthwhile thing to know about, even in everyday life. I work at a homeless shelter at the moment, and I’ve occasionally gone out of my way to help people because I knew about heroic responsibility. Even if I’m not tackling homelessness as a general problem, it has still helped me become a better me.
This seems completely incompatible with the actual concept, but certainly more palatable.
The problem with the concept as a whole is that it imposes an impossible requirement → I will be maximally guilty whatever I do → why even bother doing anything. Humans (with rare exceptions) just aren’t built such that heroic responsibility works for them. If I’m only responsible for close relatives and friends plus some limited charity, I can actually fulfill my responsibilities so there’s a reason to try, and so unheroic responsibility is a better model to live by unless you want to impress LWers.
The way I would put it: Doing the right thing is hard. It doesn’t mean one should give up without trying. Also, something can be done better even if it’s not done perfectly.
In what contexts do you try to convince other people of heroic responsibility? Why do you want to frame it that way?
I think the concept came to LW from HPMOR. Specifically from:
“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.”
Most people reject that kind of responsibility. It’s no accident that the person who wrote those lines is on a quest to save the world.
It is also in some sense quite telling that the character who speaks those lines is a child who hasn’t learned the rules about what is and what isn’t his business.
Taking responsibility for the life of another is invasive, and if you look at the story, Hermione isn’t that happy that Harry tries to be the prime hero.
Half a year ago I sat in a café and had a conversation about personal development. As “collateral damage” the words I spoke brought up a deep personal issue in a stranger next to me, and the person sort of angrily started an interaction with me. I went into a direct 10-minute NLP intervention.
Hopefully it helped, but I didn’t give the person my contact details afterwards to prolong the interaction; I just told him to seek help elsewhere.
That particular interaction burned me out, and that was okay because the other person started it.
If, however, I’m sitting in public transportation and the person next to me is crying after ending a telephone conversation, I don’t take it as my responsibility to fix the issue.
There are days where I might do a bit on a nonverbal level, but I wouldn’t impose myself into the situation by speaking words. I could try to play the role of a hero, but I often choose against it, and that’s fine.
When it comes to psychological help for friends, I think it’s good to offer it. It’s good to explain to someone which choices are available to them. On the other hand, everybody has a right to feel bad, and if a friend wants to feel bad and/or not be helped by me, then it’s not my business to break through and make him feel well.
That also means that if people around you don’t want to take responsibility, don’t force it on them. It’s often much better to lead by example. Tell others stories about how you feel great because you made a difficult decision to practice heroic responsibility. Bonus points for picking stories that are not out of reach for your audience ;)
In what contexts do you try to convince other people of heroic responsibility? Why do you want to frame it that way?
I’m not exactly trying to convince them, just trying to explain the concept. It’s something that occasionally comes up when you mention Less Wrong somewhere on the internet. “Less Wrong, aren’t those the guys that think you are personally responsible for children dying in Africa?”
That also means that if people around you don’t want to take responsibility, don’t force it on them. It’s often much better to lead by example. Tell others stories about how you feel great because you made a difficult decision to practice heroic responsibility. Bonus points for picking stories that are not out of reach for your audience ;)
“Less Wrong, aren’t those the guys that think you are personally responsible for children dying in Africa?”
In those cases, it’s useful to explain the advantages of that mindset. Knowing you saved a child in Africa from dying of malaria feels really great. It makes you feel powerful and gives you a sense of agency.
Happiness research shows that giving to other people often makes you happier than buying possessions for yourself.
Um. The one time I donated to a charity (as a child), I immediately felt terrible guilt. My family was poor at the time, and I realized my parents might have needed those $300 of saved-up allowance. When I save money, I reduce the risk that I will be a burden to those close to me, and that’s really fucking valuable.
Laser eye surgery (LASIK) has been suggested by several people on LessWrong as a costly procedure with a high likelihood of improving your life. I do not think this is a good trade-off across a lifetime, because of presbyopia.
Almost all humans experience presbyopia. This is age-related deterioration in the ability of the eye to adjust focus. Historically, its biggest effect for most people was reduced ability to read, but now it also affects the ability to use computers.
If you have myopia (short sight), you can not see distant objects without distance glasses. However, myopia means that you will retain the ability to focus at close distance for longer as presbyopia develops.
So LASIK surgery is not only trading money for better distance vision: if it works, you get better distance vision but also you get worse close vision as presbyopia develops.
How long will you live in each condition?
According to last year’s survey, the mean age is 27.4 (stdev 8.5). Presbyopia usually develops from age 40. Let us disregard the possibility of uploading, because the mean predicted date for the singularity is 2150. Life expectancy for a 27-year-old US male is 50 more years. This is likely an underestimate: it is a period life expectancy, not a cohort life expectancy, and we do expect future improvements in longevity. It is also a population average, and LessWrongers are smarter and better educated than average, which is associated with longer life. We will nonetheless use it for illustration.
So, very roughly, the trade for an average LessWronger considering LASIK is reduced need for glasses for 13 years against increased need for glasses for 37 years, or longer. Also, I guess that most LessWrongers with myopia spend much more time reading and using computers than doing tasks that require distance vision. I also guess they value those activities more.
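For concreteness, here is that arithmetic as a small Python sketch (the numbers are the survey and life-table figures above, used only for illustration):

```python
# Rough LASIK trade-off arithmetic from the figures above (illustrative only).
mean_age = 27.4          # mean LessWronger age, from the survey
presbyopia_onset = 40    # typical age at which presbyopia develops
further_life = 50        # period life expectancy for a 27-year-old US male

years_better_distance = presbyopia_onset - mean_age               # ~12.6 years
years_worse_close = (mean_age + further_life) - presbyopia_onset  # ~37.4 years

print(f"better distance vision: ~{years_better_distance:.0f} years")
print(f"worse close vision:     ~{years_worse_close:.0f} years")
```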
Therefore this does not seem a good trade to me even if it were free of cost and risk.
Have I made a mistake? Has anyone more data? I know old people with presbyopia and I know young people with LASIK but I do not know anyone who has both. I guess some people on LessWrong have LASIK, but very few or no people have presbyopia, so I expect no people here with both. I guess that in universities there are faculty members who have both so I will ask on my next visits. Sometimes it seems even that all faculty members have myopia!
The need for glasses is a binary variable—either you need them or you don’t.
Someone with presbyopia always needs glasses because he can’t focus both near and far. It doesn’t matter whether that person started with good eyesight, or with myopia, or had myopia corrected by LASIK—he will need glasses.
You seem to think that presbyopia “corrects” myopia, that is not so. In geometric terms myopia “translates”, shifts your zone of focus closer to you so that it doesn’t include infinity any more. But presbyopia narrows your zone of focus, contracts it. You don’t get far vision back by overlaying myopia with presbyopia.
Sorry, I was not clear. I do not think that presbyopia corrects myopia. It even makes distance vision worse. But at close range, myopia can offset the effect of presbyopia.
As presbyopia narrows your zone of focus, you can not focus as close as previously. If you have myopia, you can focus at much closer distances than people with normal vision. Before presbyopia, this is not much use. When presbyopia develops, you have more close vision to spare, and can still read a book or a computer screen when people with normal vision would need reading glasses.
I will give a simplified example. A person has mild myopia and needs a correction of strength −2.50 dioptres to focus at infinity. They have successful LASIK surgery, which gives them normal vision. They develop severe presbyopia and require a correction of +2.50 dioptres to be able to focus at a comfortable reading distance. They now need reading glasses. If they had not had LASIK surgery, they would need distance glasses but would not need reading glasses: they can gain the +2.50 dioptre correction by taking off their distance glasses.
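For the optically inclined, a minimal Python sketch of that example in vergence terms (assuming the thin-lens approximation and zero remaining accommodation; the function is a hypothetical helper, not a clinical formula):

```python
def focus_distance(refractive_error_D, lens_D=0.0):
    """Distance (metres) at which an eye with no remaining accommodation
    (severe presbyopia) sees sharply, wearing a lens of power lens_D dioptres.
    Negative refractive error = myopia; hyperopia is out of scope here.
    Returns inf when the eye is in focus at infinity."""
    residual = refractive_error_D - lens_D  # the lens offsets the error
    return float('inf') if residual >= 0 else -1.0 / residual

print(focus_distance(-2.50))          # 0.4  -> uncorrected myope reads at 40 cm
print(focus_distance(-2.50, -2.50))   # inf  -> with distance glasses, sharp far away
print(focus_distance(0.00))           # inf  -> post-LASIK eye is sharp only far away
print(focus_distance(0.00, +2.50))    # 0.4  -> needs +2.50 D readers for 40 cm
```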
This is why I say, “[with successful LASIK] you get better distance vision but also you get worse close vision as presbyopia develops.” What is more important may vary from person to person.
I have spoken to an optician about this, and she mostly confirmed it. This is only an N=1 argument from authority though! She agreed, of course, that people with myopia and presbyopia can get good close vision by taking off their distance glasses; they do not need reading glasses unless they wear contact lenses or have had LASIK. However, I must say that she did not think that this would be a reason for most young people not to have LASIK. She said she would certainly not advise LASIK for people who have presbyopia or would likely have it soon. She also said that for people who wear contact lenses for myopia and then develop presbyopia, she suggests under-correction in one eye, so that they have good distance vision in one eye and good close vision in the other.
Need for glasses is not a binary variable. It has more states than that. It must at least include ‘need distance glasses’ and ‘need reading glasses’.
Please keep us updated on your findings and decisions.
I’m also looking at a cost-benefit analysis of LASIK and watching the reports of early adopters. A high chance of an improved quality of life versus a small chance of significantly reduced quality of life. So this is what risk aversion feels like from the inside.
I have spoken to an optician, as described in my reply to another message in this thread.
I have also asked at a computer science department in a European university. It was funny! Everybody in the department except some grad students had myopia, including many of the older faculty. But nobody had had LASIK. Sorry, I did not count properly; my visit was for another reason. However, I can say the most common mode of vision correction was glasses for distance vision, taken off for reading or computers. There were also some people with varifocal glasses, and some with contact lenses for distance vision plus reading glasses for reading or computers.
[LINK] Sleep loss can cause brain damage (permanently lost neurons, at least in mice). Even though the study is only about mice, it nonetheless provides references to more general results:
I have a friend with Crohn’s Disease, who often struggles with the motivation to even figure out how to improve his diet in order to prevent relapse. I suggested he should find a consistent way to not have to worry about diet, such as prepared meals, a snack plan, meal replacements (Soylent is out soon!), or dietary supplement.
As usual, I’m pinging the rationalists to see if there happens to be a medically inclined recommendation lurking about. Soylent seems promising, and doesn’t seem the sort of thing that he and his doctor would have even discussed. My appraisal of his doctor consultations is that they run something along the lines of “You should track your diet according to these guidelines, and try to see what causes relapse” rather than “Here’s a cure-all solution not entirely endorsed by the FDA that will solve all of your motivational and health problems in one fell swoop.” For my friend, drilling into sweeping diet changes and tracking seems like an insurmountable challenge, especially with the depression caused by simply having the disease.
I’d like to be able to purchase something for him that would let him go about his life without having to worry about it so much. Any ideas on whether Soylent could be the solution, in particular as to its potential for Crohn’s?
Consider helminthic therapy. Hookworm infection down-regulates bowel inflammation and my parasitology professor thinks it is a very promising approach. NPR has a reasonably good popularization. Depending on the species chosen, one treatment can control symptoms for up to 5 years at a time. It is commercially available despite lack of regulatory approval. Not quite a magic bullet, but an active area of research with good preliminary results.
There is no known “cure-all solution not entirely endorsed by the FDA that will solve all of your motivational and health problems in one fell swoop.” A lot of people with Crohn’s seem to get some benefit from changing their diet, but the conclusions they draw always seem to contradict each other, and in general the improvements are temporary. What it looks like to me (and to at least one person with her own experience of the problem) is that radically changing your diet every few years is what you need to do.
Has anyone ever tried writing rationalist fiction of The Sandman? It’s a world that explicitly runs on storytelling patterns, but surely something can be done that illustrates the merits of rational thought even in such a setting. Rationalists should win and adapt to the circumstances, even if those circumstances are a dreamscape.
Neil Gaiman himself, sort of. In his take on the Marvel Universe, the genius scientist character has figured out the world runs on storytelling logic instead of mechanical science and that he won’t ever be able to permanently change his friend turned into a superhuman rock monster back into a human since “guy permanently turned into superhuman rock monster” makes a better story element than “guy who was a superhuman rock monster but is all better now”.
For the current fic writers, I don’t see it working. Rationalist fiction writers range from middling to terrible in skill compared to traditional fiction writers at the top of their game like Gaiman, and trying to do this would basically be trying to one-up Gaiman at his own game on his home field. Not to mention that his world-building is a lot more self-aware already than the perennial nerd culture favorite soft targets like Star Wars or Harry Potter, as the Marvel 1602 example shows, so you wouldn’t have the nice obvious stuff to work with.
@First bit; that’s Discworld-grade brilliant, but does the knowledge spread, as it does in Discworld where everyone is Genre Savvy, or is Reed Richards still Useless? Of course, the problem with narrativium is that attempting to take advantage of it is likely to bite you back, but is it better than not knowing?
My very first attempt at writing, waaay back in 2007, was a story about a guy who was thrust into a Narrativium-based world, which was kind of like an immense, live Let’s Play for the entertainment of a bunch of True Fae children (about three centuries old). Grant Morrison’s Action Comics come to mind. He’d be thrust into different genres and different roles, and he’d have to start thinking very meta, very fast, if he wanted to survive each story. He also wanted to get the damned show cancelled without giving a bad performance that would give the Showrunner a reason to fire him (literally), leading to many Springtime For Hitler moments, much to his frustration (sort of like Hideo Kojima and Metal Gear, or Hideaki Anno and Evangelion)… but he also starts to take pride in his work (think Walter White and his blue meth)… At some point, he’d become aware that the way things made “storytelling sense” rather than “logical sense” also extended to the world outside the game. And then we’d have an Animal Man meets Grant Morrison type of meeting.
Most of the references I’m making here are retroactive; I had the idea long before I came in contact with them, but they’re handy for condensing. Another work I’d compare this to would be GAINAX’s Abenobashi Mahou Shoutengai. In fact, now that I think of it, it might be better to start the story as an Ontological Mystery, instead of having his slavery revealed to him right away as I initially thought.
Anyway, my point here is to say that, almost as soon as I wrote the first chapter, I went on hiatus, because I was keenly aware that I was biting off much, much more than I could chew. Even now, I don’t think I have remotely what it takes to plan out and pull off such a project. Designing locomotives is such easy work in comparison; all you need is patience and method.
his world-building is a lot more self-aware
It’s extremely Post-Modern and illogical in every sense. It is awesome.
While the local meetup every month has been amazing every time I went, it didn’t have that much impact on my everyday life. Joining Habit RPG definitely has.
Priming can nudge one’s thoughts in certain directions; fashion can nudge others’.
It’s easy enough to try priming abstract, rational, far thinking with cool blue colours, Mozart, and by surrounding oneself with books… but is there any data on scents that nudge peoples’ modes of thinking in similar directions? Failing that, is there anecdata?
Taking action instead of spending a lot of energy in mental analysis is often rational.
In some cases, yes; but people tend to be willing to take action all on their own quite often anyway. I’m hoping to find something that nudges in the direction of far-mode thinking for those times when it /is/ appropriate to stop and think.
Crapshoot: Say I have some kind of data per country and I want to use Python or other FOSS tools to plot it on a good-looking map at the country level. Is there a good tutorial for this? I ask because I can do virtually anything else with Python, like data manipulation and analysis or plots, so it’d be nice to do this with Python too.
R is free & open source, and widely used for stats, data manipulation, analysis and plots. You can get geographical boundary data from GADM in RData format, and use R packages such as sp to produce charts easily.
Or at least, as easily as you can do anything in R. I hesitate to suggest it to people who already do data work in Python (it’s less … clean) but in this sort of domain it can do many things easily that are much harder or less commonly done in Python. My impression is the really whizzy, clever stats/graphics stuff is still all about R. (See e.g. this geographic example.) There are many tutorials, some of them very good in parts, but it’s famously slippery to get to grips with.
I know about R. In fact I switched from R to Python because R is less … clean. It looks like I will have to use R for plotting though the rest of the stack will be in Python.
Thanks for the reply. Basemap looks like what I want, but it is not quite: it is surprisingly easy to plot arbitrary data on a world map, but I didn’t manage to, e.g., colour the countries of the world by some metric. If you look into basemap, please keep me posted.
P.S.: There is a tutorial that shows how to colour Italy’s regions separately, but I did not manage to colour in the whole world.
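For reference, a minimal sketch of the country-colouring recipe with basemap, assuming a Natural Earth admin-0 countries shapefile on disk (the file name and the `metric` values are hypothetical placeholders):

```python
# Hypothetical sketch: colour countries of the world by a per-country metric
# using matplotlib + basemap and a Natural Earth admin-0 countries shapefile.
import matplotlib.pyplot as plt
from matplotlib import cm, colors
from matplotlib.patches import Polygon
from mpl_toolkits.basemap import Basemap

metric = {'Germany': 0.8, 'France': 0.3, 'Italy': 0.6}  # placeholder data

fig, ax = plt.subplots(figsize=(12, 6))
m = Basemap(projection='robin', lon_0=0, ax=ax)
m.drawmapboundary(fill_color='white')

# readshapefile stores projected polygon vertices in m.countries and the
# per-polygon attribute records (including the country name) in m.countries_info
m.readshapefile('ne_110m_admin_0_countries', 'countries', drawbounds=True)

norm = colors.Normalize(vmin=0.0, vmax=1.0)
for vertices, info in zip(m.countries, m.countries_info):
    value = metric.get(info['NAME'])
    if value is not None:  # countries without data stay uncoloured
        ax.add_patch(Polygon(vertices, facecolor=cm.Blues(norm(value))))

plt.show()
```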
What does Everett Immortality look like in the long term?
The general idea of EI is that there is always some small chance you will survive in any given situation, so there will be some multiverse timelines whose present is the same as your present, but in which you keep on living indefinitely. However, some forms of survival are a lot more likely than others; eg, it’s a lot more likely that my cryonically-preserved brain will be scanned and turned into an AI than that a copy of my brain will spontaneously appear out of nothingness. Thus, it makes sense to plan around the most likely sorts of scenarios, and not to bother doing much planning for the least likely ones.
But thinking /very/ long term, to the heat death of the universe… every form of negentropy is going to end up exhausted, with no more energy gradients for life and intelligence to survive on; meaning that however extended a life might be, there will be some point at which all of a person’s futures eventually fade away...
… or maybe not. Thermodynamic miracles—events violating ordinary statistics—will, in the long run, happen every so often… so might it be possible for some form of life in that era to rely on them as the last available source of negentropy? Which forms of TMs occur most often, that could most reliably be ‘fed’ from? How often do they occur, compared to the potential stability of patterns of matter-energy at this time-scale?
You’re assuming some sort of pattern theory of identity when you consider uploads a potential form of survival. If you go all-out pattern theory of identity and assume we’re in a big world, is there a reason why the subjectively subsequent moments of awareness need to actually take place at increasing time points on the universe’s timeline? A state of matter that corresponds to your pattern’s subjective t + 1 might have occurred at the universe’s t − 10000 at some distant light cone. If your mind stays at any finite size, it’ll eventually just end up going over the same states again, so you could just get an unbound subjective experience timeline inside a fixed timeslice of a spatially infinite, temporally finite universe.
If the ‘afterlife’ is infinite, then it will have infinitely more integral measure than the normal life.
Infinite as in “if you succeed in making it into situation X, you are guaranteed to live forever”, or merely potentially infinite, as in “for every situation X where you are alive, you will survive it in some Everett branch” (in other words, you never run out of quantum immortality)? In the latter version, the integral of the ‘afterlife’ may still be smaller than the integral of ‘normal life’.
During a person’s ‘normal’ life the number of Everett branches containing that person approaches infinity. The way mortality currently works is that there is a certain probability that you will die during each year; let’s say it’s 0.01 when you’re 20. That fraction of Everett branches gets “eliminated” each year. This probability of dying increases each year, until it approaches 1 when you’re close to the age of 120 (let’s ignore life-extending technologies). In the Copenhagen interpretation, the probability that you’re alive after the age of 120 is effectively zero. In MWI there are a few branches that survive beyond this, some of them for very long, potentially forever. So I agree with you that the integral of branches during a person’s normal life is probably greater than that of the smaller number of branches that survive almost forever. This is true even if the number of branches or their length is infinite; didn’t Cantor prove that there are different sized infinities?
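A toy Python illustration of that shrinking measure (the hazard numbers are made up, and the cap just below 1 stands in for the MWI point that the yearly death probability never quite reaches 1):

```python
# Toy model: fraction of Everett branches in which a person is still alive,
# with a Gompertz-like yearly death probability that starts at 0.01 at age 20
# and doubles every 8 years, capped just below 1 so some branches always survive.
surviving = 1.0
for age in range(20, 121):
    p_death = min(0.99, 0.01 * 2 ** ((age - 20) / 8))
    surviving *= 1.0 - p_death
    if age % 20 == 0:
        print(f"age {age:3d}: {surviving:.3e} of branches remain")
```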
Is this what you were after? I’m a bit confused. Tell me if I made any mistakes.
I don’t have bandwidth for a podcast just now; so ‘infinite’ in what direction? If the number of MWI timelines can be divided infinitely, then that seems like it would suffice, even if the universe is finite in many other ways.
Did he give any reasoning for that belief? Eg, does assuming non-infinitesimal worldlines improve the predictions of the interference of double-slit style experiments?
Again from what I recall: scientists have not found any evidence of infinities, math incompleteness problems go away without infinities, and computer physics models work even though computers have finite memories.
So let’s say you’re a soldier in battle in 2000 BCE. Someone just slashed your stomach open with a sword, you’re in horrible pain, your internal organs are spilling out, but you’re still conscious and aware of what’s happening. How are quantum immortality and the power of science going to work out for you now?
EDIT: I thought quantum immortality was thought of as a thing that applies to everyone everywhere. Are we discussing some more constrained version here that doesn’t apply to “your chest just got smashed by an engine block but you’re still conscious for a little while” but does apply to cryonics, uploading, etc., information-theoretic undeath shenanigans?
The answer is the most likely miracle, but I am not sure what exactly that would be. All necessary miracles are so improbable that I don’t trust my ability to evaluate their relative probabilities.
It could be something like: By random movement of atoms, your organs jump back inside and your wounds heal (and your body overcomes the infection). All witnesses stop fighting and start worshiping you as a god. You don’t understand the situation, but successfully use your new status to stop the war or escape from it. You collect smart people around you, supported by your followers’ donations, and together you invent science, relatively slowly. It still takes a hundred years or more, during which you miraculously survive with sufficient brain function. At the end your team develops a recursively self-improving AI (not necessarily a Friendly one, only one that wants to keep you alive).
Despite all the miracles, this seems like the least miraculous path from “cut with a sword” to “immortality”. (Assuming that the damage really happened, because otherwise the most likely path starts with “you wake up from the nightmare”.)
This is curiously detailed for something where basically the only requirement is that you stay aware of every moment, constant horrible pain and debilitating injuries aren’t any sort of problem unless they keep you from staying conscious, and there’s basically no lookahead beyond whatever the duration between consecutive states of subjective consciousness is, definitely something less than a second.
Sure, someone in the multiverse is going to get the happy shiny human-friendly thermodynamic miracle starting up for them, but it seems like there’d be countless quite a bit less improbable quivering masses of horrible injuries and pain who Just. Can’t. Die.
I mean, think of the lookahead. Sure, the miracle scenario has you having a lot bigger measure of existence after the miracle has taken place, but there doesn’t seem to be a point going directly forward from the lethal injury state where it’s more likely to go down the path of the miracle starting to happen than to just stay improbably aware in your current rapidly decaying state. You’d probably end up with some incredibly measure-sparse weird Boltzmann-brain-like states in the end, but isn’t it possible that at every step along the way there are a lot more pseudo-Boltzmann-brain futures than there are body-repairing thermodynamic miracle futures?
Quantum immortality is based on MWI, which is designed explicitly to match the standard “shut up and calculate” approach to QM, which means that it cannot have any measurable effects outside the standard framework, where “Everett branches” are known as “possible outcomes”. If you expect different consequences for your personal experience in the two pictures, you probably do not understand MWI.
So I can kill myself without worrying about some nasty existential horror shit, if need be? Because that’s really all I wanted to know, and LW seems like the only place that would take a query like this seriously.
Does not follow. MWI is orthogonal to “some nasty existential horror shit”, it doesn’t provide evidence either for or against your worries.
I have no idea what you worry about, but according to our current understanding, in this life there is no detectable difference between a Copenhagen world and an Everett world. As to the afterlife, all bets are off—contemporary physics can’t help you there.
Trying to understand quantum physics on the basis of web comics doesn’t strike me as useful. The lesson you should draw from that comic is that standing near a nuclear bomb when it explodes is a bad idea.
They neither know of night or day,
They night and day pour out their thunder.
As every Ingot rolls away,
A dozen more are split asunder.
There is a sign above the gate: Eleven days since a man lay dying,
Now every shift brings fear and hate, and shaken men in terror crying.
*
The molten rivers boil away a fiery brew Hell never equalled,
To their profits the bosses pray,
And Mammon sings in his grim cathedral:
His attendants join the choir,
and Heaven help us if we're shirking!
Stoke the furnace's altar fire and just be thankful that we're working!
*
To this, men, charge the hoppers high, 'lest you endure the foreman's choler!
To this, men, drain the tankards dry,
And let us toast the almighty Dollar;
It keeps us chained here before the fire,
Where heat and noise send the weak a quaking.
That the Siren's infernal cry the open heart sets the ground to shaping.
*
To this, men, raise the ladies high and make them shriek with love and laughter!
To this, men, kiss your woman's eyes,
and raise a song unto the rafters.
Wash the steel mill from your hair,
Beat the table 'till it's breaking.
Don't let terror enter there and in the hearth set the glasses breaking!
Would it be possible for a comment to have anchors that are Karma-scored separately, so that someone making several points in the same comment can see which ones are gaining or losing Karma?
If the points are chained to make a coherent argument, that’s going to risk having the argument split up, whether you nest them as replies or put them sequentially.
While much ink has been spilled arguing for this approach to the study of political science, little attention has been paid to justifying and rationalizing the method. On the rare occasions that justification has been attempted, the results have been maddeningly vague. Why test predictions from a deductive, and thus truth-preserving, system? What can be learned from such a test? If a prediction is not confirmed, are assumptions already known to be false to blame? What precisely is the connection between a model and a theory? These questions are never addressed in a satisfactory way.
Some will no doubt argue that such justifications are fruitless and that we should just “get on” with the business of doing science. Philosophical discussions should affect political scientists no more than political scientists affect policy makers...
I’m having trouble knowing how well I understand a concept while I’m learning it. I tend to be good at making up consistent verbalizations of why something is the way it is, or how something works. However, these verbalizations aren’t always accurate.
The first strategy against this tendency is simply to do more problem sets with better feedback. I’m wondering if we can come up with a supplementary strategy that lets me check whether I really understand a concept or not.
I’m contemplating going to grad school for psychology.
I’d really like to focus on the psychology of religion, but there are other areas of psychology that I find interesting too (e.g. evolutionary psychology).
I don’t have a background in psychology; I took one intro course in undergrad to fulfill a requirement for my bachelor’s in IT. I do read a lot of pop-sci about psychology.
Think about how you will earn your living. Who will pay you money, for what, and how much? In particular, consider this under the assumption that you will NOT be able to get a tenured position in academia.
They asked everyone what blogs they wanted on the side panel when they redesigned the site. I don’t think the list has been changed since they put it up.
Thanks for the information. In that case, I hope there is another opportunity in the future to ask which blogs should be featured on the side panel. I don’t know what anyone else is looking for, but as far as I’m concerned, I check these other rationality blogs as often as I check things posted directly to Less Wrong; I find Slate Star Codex and Overcoming Bias particularly interesting. Anyway, if other people gain similar value from these other blogs, perhaps more could be added in the future. I understand that if each of us freely suggested whatever blogs we individually considered ‘rational’, there would be lots of noise and redundancy, and we would swamp the forum with poor suggestions. So I may start a poll in the future asking which blogs the community as a whole would like to see added.
Any LW NYCers have a room available for <$1,000 per month that I (a friendly self-employed 23-year-old male) might be able to move into within a week or two? Or leads on a 1br/studio for <$1400? I could also go a bit above those prices if necessary.
PM me if so and I’ll send more details about myself. I’m also staying with some friends in NYC right now so we could meet up anytime.
What is your irrational reading guilty pleasure? Whenever I need a cheap laugh, I browse Conservapedia. Where do you go to indulge the occasional craving for high-octane idiocy?
Facebook announced Graph Search with great fanfare, but if I want to know something simple, like getting a list of my recently added friends, I can’t just type it into the search bar; I have to search in Google and find out that I have to go through the Recent Activity tab.
Similarly, I have told Facebook through its menus that I speak English and German. It still shows me French and Romanian posts from my friends that I can’t read, and it doesn’t offer to translate them. A simple idea, like showing me the English posts my French friends make but not the French ones, just doesn’t seem to be implemented.
When using the Facebook app I can’t easily view all events that happen today.
What do all those Facebook engineers do, when they don’t even seem to pick the low-hanging fruit?
You have to remember that you are not the customer for Facebook… you are the product.
Giving you more control over your timeline and the posts you see is good for you, but makes Facebook’s ability to charge for access to you through “promoted posts” substantially less inviting.
On the other hand, something like graph search allows the opportunity to compete with Google and LinkedIn.
Just now I had the experience of having Facebook helpfully place directly on my newsfeed a post by somebody who is not on my friends list, who happens to trigger the hell out of me and who I actively avoid reading about on Facebook as much as possible. Thanks Facebook, great algorithms you’ve got there.
Based on discussion at the South Bay Area meetup tonight.
The five pillars of Islam are
Shahadah, confession of faith: Declaring that there is no god except God, and Muhammad is His prophet.
Salat, ritual prayer, five times a day.
Sawm, fasting during Ramadan—ideally, eating nothing between sunrise and sunset.
Zakat, giving alms.
Hajj, making a pilgrimage to Mecca.
By analogy, I propose five pillars of LessWrongIsm:
Confession of faith: “There is no God except the one we’re going to build, and the sage Yudkowsky is Its prophet.”
Polyphasic sleep, five times a day. An acceptable alternative is polyamorous sex, five times a day.
Diet. Any unusual diet will do—paleo, four spoons of sugar in the morning, strict vegan, whatever; but it must be strictly adhered to between sunrise and sunset.
Efficient altruism.
Moving to the Bay Area, or making pilgrimage to a CFAR workshop.
The topic “Is LW a cult?” has been discussed so much here and elsewhere that it is probably worth creating a LWiki page about it, including a discussion of the term “cult” and of when applying it constitutes a non-central fallacy.
Polyphasic sleep, five times a day. An acceptable alternative is polyamorous sex, five times a day.
As far as I know there’s Uberman with six naps and Everyman with four; I don’t know of a schedule with five naps to which people have successfully adapted.
Diet. Any unusual diet will do—paleo, four spoons of sugar in the morning, strict vegan, whatever; but it must be strictly adhered to between sunrise and sunset.
I don’t think we care about the timeframe of sunrise to sunset. It has to be the whole day, but there might be one cheat day per week.
I don’t know, and unfortunately the author is dead so we can’t ask him.
That said, “hero worship” could mean a number of different things, not all of which might be symptomatic of a dangerous cult. Could you expand on what you mean by it?
Eliezer Yudkowsky is one of the most accomplished, knowledgeable, and stimulating writers I’ve ever encountered, and if he ever were to visit my house, I’d buy a freezer large enough to accommodate his head, just in case he choked on my boiled chickpeas. That being said, I think elevating him to Chuck Norris status is decidedly harmful to the propagation of our cause. He himself has advocated that we don’t worship Einstein, because it obscures the fact that he was just as human as we are, and discourages others from striving to achieve his level. Likewise, EY is no superhero, no demigod, no mythic savior, and it won’t do to treat him like one. This is why, as much as I admire the guy’s awesomeness, I’m against the existence of the “EY Facts” thread. I can’t explain rationality to others and keep a straight face while thinking that the author I’m citing is the Way, the Truth and the Life, the last hope and salvation of humanity. Leave it to history books to sing his praises, but for the time being, it will be the opposite of helpful.
I think the “EY facts” goes the other way. That’s not hero worship, that’s making a joke of hero worship.
“Chuck Norris status” is the opposite of hero-worship. Is there anybody who seriously believes that Chuck Norris is actually possessed of superhuman powers? Heck, is there anybody who even seriously believes he’s a uniquely talented actor?
The great-writer, chickpeas-and-freezer part is truly my opinion.
Telling the world that EY is a great writer etc. is fine. Telling the world that you believe him to be great enough that you’d buy a freezer large enough to accommodate his head, in case he died in your house, is much worse than self-mockery such as the EY facts page.
No offense, but I suggest that you stop trying to improve the reputation of LW/MIRI. If MIRI wants to improve their reputation and public relations they should hire a professional outsider who is neurotypical (I am neither, so maybe I am wrong about the impression your opinion gives).
Upon rereading my post after a full night’s sleep, I can see the problems with how I expressed it. I agree that it may have come off as too fanboyish, and we’re seeing the line between fanboyism and idolatry at different positions. Continued argument will only dig me deeper.
I think elevating him to Chuck Norris status is decidedly harmful to the propagation of our cause.
Oh, dear. Elevating EY to Chuck Norris status is hilarious and, I would argue, shows “our cause” in good light.
Maybe elevating EY to the divinely-inspired-prophet (PBUH) status would be harmful, but I haven’t seen anyone do that.
I can’t explain rationality to others and keep a straight face
I don’t see any need to keep a straight face. I don’t know if I am typical, but I don’t respond well to things explained to me with a terribly serious expression (well, as long as they don’t involve things like staunching bleeding from open wounds and such).
Eliezer Yudkowsky is one of the most accomplished, knowledgeable, and stimulating writers I’ve ever encountered, and if he ever were to visit my house, I’d buy a freezer large enough to accommodate his head, just in case he choked on my boiled chickpeas. That being said, I think elevating him to Chuck Norris status is decidedly harmful to the propagation of our cause.
Just one data point here. The EY facts post was funny and not at all cultish. Whereas your first sentence (and to a lesser extent the whole comment) made me cringe.
I want to thank this community for existing, all the people founding it, the people contributing to it and all the stuff linked here. I may not like all the topics or agree with all the opinions posted here. Nor may I find use of most of the stuff I read around here. But at least I don’t feel so alone anymore.
That is all, thank you.
When I hit discussion, it keeps automatically redirecting me to the ‘top posts’ even when I click back onto ‘new’. Is anyone else getting this?
Happens in Safari, not in Firefox.
I’m seeing the same problem in Chrome.
Unfortunately, I use an iPad :(
Previous discussion in the last open thread: http://lesswrong.com/lw/jv4/open_thread_1117_march_2014/apgw
Yes, this happens to me in Windows, but not Ubuntu (both Chrome).
I’m experiencing this with Chrome on my phone, but did not notice it earlier on my PC, even though that also uses Chrome.
Yes, very annoying.
When I use Chrome it happens; when I use Firefox it doesn’t.
Yep.
How well do medical doctors fare in terms of health outcomes compared to people of similar socioeconomic status and family history? Is there a difference between research doctors and practising doctors? What about nurses? Is there a notable difference there too?
This question is posted within the context of “how big is the effect of medical knowledge on personal health?” and the assumption that medical doctors should represent the upper end of the spectrum. Other medical professionals should represent data points in between. Taken together, this should give some hint of the personal value of medical knowledge, in some kind of unit.
This study seems to go quite a ways towards answering your question:
-- Frank, Erica, Holly Biola, and Carol A. Burnett. “Mortality rates and causes among US physicians.” American journal of preventive medicine 19.3 (2000): 155-159.
You may also find this worth checking into:
-- Aasland, Olaf G., et al. “Mortality among Norwegian doctors 1960-2000.” BMC public health 11.1 (2011): 173.
EDIT: I added a second study and cleaned up the citations.
This is spot on! And a great starting point for further research. Thank you.
Gah! I had been deceived, thanks for clearing that up.
Somewhat related, I remember reading an article claiming that doctors are more likely to opt out of life-prolonging treatment. Not really well-cited, but it seemed like an interesting claim: that end-of-life hospital care is so bad that they would choose not to undergo it.
Link
Sorry for nitpicking, but don’t you mean ‘doctors are more likely to opt out’?
Yup, that’s what I meant. Fixed. Thanks.
Is there a name for the situation where the same piece of evidence is seen by both sides of an argument as obviously supporting their own side?
eg: New statistics are published showing ethic group X is committing crimes at 10 times the rate of ethic group Y.
To one side, this is obvious evidence that ethic group X are criminals.
To another side, this is obvious evidence the justice system is biased.
Both sides are totally opposed, yet see the same fact as proving they are right.
If redheads are 10 times more likely to be in jail for violent crimes, it is evidence for both “redheads are violent” and “judges hate redheads”—and both might be true!
And “redheads are violent” and “judges hate redheads” are not totally opposed; they only look that way in a context where they are taken as arguments in support of broader ideologies which themselves are totally opposed (or rather, which compete with each other and so oppose each other).
More generally, many facts can be interpreted in different ways, and if one interpretation is more favorable to one ideological side, that side will use that interpretation as an argument. Seen that way, it seems almost inevitable that facts will look like they support “opposite sides”.
Why the Bombings Mean That We Must Support My Politics
Confirmation bias.
Also more specifically attitude polarization:
Apologies for the nitpick, but didn’t you mean ethnic group?
Everyone knows utilitarians are more likely to break rules.
(This is mostly a joke based on the misspelling. I know a sophisticated utilitarianism would consider the effects of widespread lawbreaking and would not necessarily break laws so often as to be overrepresented in prison.)
I don’t know if you intended your disclaimer to be funny, but I found it funnier than the original joke.
I did actually mean ethnic group, but now that I see my typo, I’m actually quite liking it this way, as it’s less likely to trigger real-world connotations.
You know what they say: one man’s Modus Ponens is another man’s Modus Tollens
That’s only once you reformulate the grandparent’s scenario as “If the justice system is unbiased, racism is justified.” It surprises me that the parent would cut the grandparent’s class along its joints… can mstevens think up examples of his class not covered by the parent, or nonexamples covered by it?
I have sometimes seen arguments that fit this pattern, including on Less Wrong —
It seems to me that something is deficient or abusive about many arguments of this form in the general case, but I’m not sure that it’s always wrong. What are some examples of legitimate arguments of this form?
(A point of clarification: The “meta-level theory or ideology” part is important. That should match propositions such as “consequentialism is true and deontology is false” or “natural-rights theory doesn’t usefully explain why we shouldn’t hurt others”. It should not match propositions such as “other people don’t really suffer when I punch them in the head” or “injury to wiggins has no moral significance”.)
One mistake is overestimating the probability that the other person will act on their ideology.
People compartmentalize. For example, in theory, religious people should kill me for being an unbeliever, but in real life I don’t expect this from my neighbors. They will find an excuse not to act according to the logical consequences of their faith; and most likely they will not even realize they did this.
(And it’s probably safest if I stop trying to teach them to decompartmentalize. Ethics first, effectiveness can wait. I don’t really need semi-rational Bible maximizers in my universe.)
I don’t think the problem with such arguments is so much that they are wrong on a factual basis; it’s that they prevent the discussion of some important ideas.
A feminist can argue that ze can measure with an implicit bias test how biased people are, and that the argument you are making is going to make the average reader more biased. Ze might be completely right, but that doesn’t mean that your argument is wrong on a factual level.
Once you move to the political position that certain things are not allowed to be said because they support harmful memes, you are in danger of getting mind-killed and being left with a worldview that doesn’t allow you to make good predictions about reality.
The missing airplane story seems like an opportunity for prediction on par with the Amanda Knox trial.
My heart sinks whenever I see these sorts of discussions on LessWrong. Could we analyze something we expect to be verifiable?
Making verifiable predictions about a missing airplane doesn’t seem all that difficult to me; what am I missing? For instance, is there something wrong with this one?
What happened to slatestarcodex and does anyone know if it’s just temporary or something to be concerned about?
My hosting company got annoyed because something was taking up too many resources. I did what the nice person on the telephone suggested (installed some WordPress plugins, uninstalled others) and it’s back online now. If the problem recurs I might have to restrict commenting for a while until I can figure out a more permanent solution, but for now everything’s fine.
I checked the site, and got a 403. Is that what you’re talking about? When did you first notice it? The latest Google cache is from Mar 18, 2014 17:37:21 GMT.
http://webcache.googleusercontent.com/search?q=cache:http://slatestarcodex.com/#
etc.
Any tips on bailing out of an argument if you want to very nearly concede the whole thing without quite saying your opponent is right?
eg if you realise the whole conversation was a terrible mistake and you’re totally unequipped to have the conversation, but still think you’re right.
Should you just admit they’re right for simplicity even if you’re not quite convinced?
I state the truth: “I tend to get too attached to my opinion in live debates and want to think about your arguments in peace.”
The people that get offended by this tend to be not the kind of people I want to associate with anyway.
“Good point. I’ll think about that when I have the chance.”
“I’m not really convinced by your argument, but I need to learn more about this issue before I can speak coherently on it”
“I think our conversation raised a lot of interesting points, but all the interesting stuff has been said. How about we switch topics?”
That works as a neutral “let’s move on”. I sort of want a feeling of conceding more (but not totally) though.
How about “I understand the points you’re making, let me think more about them”..?
The concept of heroic responsibility seems to be off-putting for some people, mostly because it looks like it puts the blame for every single bad thing at the feet of an individual. Generally, I’ve answered this objection by telling them that they don’t need to look that broadly and that they can apply the concept at a smaller, everyday scale. So instead of worrying about solving depression forever, you can worry about making sure a friend gets the psychological help they need, and not telling yourself things like: “It’s their parents’/partner’s/doctor’s responsibility that they get proper help.”
Is this a correct way to explain the concept or am I strongly misrepresenting it?
Maybe it’s not a problem with explaining the concept per se; it’s just that its consequences are unpleasant. It feels like you are telling people that heroic responsibility is one of the possible choices, one that they didn’t make, but could have made, and perhaps even should have made. -- There probably are good reasons why most people don’t take heroic responsibility, but these are difficult to explain. So it’s easier to pretend that the whole concept does not make sense to you.
Also, it’s not my responsibility to understand the concept of heroic responsibility. :D
EDIT: It may be related to the status-regulation emotion that apparently some people feel very strongly and others don’t even notice. The problem with “heroic responsibility” might simply be the emotional reaction of: “Who do you think you are that you even consider taking more responsibility than the people around you?! That is a task worthy of a king; and you obviously aren’t one. And you try to explain it to me, but I am also not a king; I don’t even pretend to be, so… this whole stuff doesn’t make any sense. You must be insane.”
It seems most people don’t feel good about being considered personally responsible for all the bad things in the world. Especially people who already suffer from anxiety of some kind.
But it’s a worthwhile thing to know about, even in everyday life. I work at a homeless shelter at the moment, and I’ve occasionally gone out of my way to help people because I knew about heroic responsibility. Even if I’m not tackling homelessness as a general problem, it has still helped me become a better me.
This seems completely incompatible with the actual concept, but certainly more palatable.
The problem with the concept as a whole is that it imposes an impossible requirement → I will be maximally guilty whatever I do → why even bother doing anything. Humans (with rare exceptions) just aren’t built such that heroic responsibility works for them. If I’m only responsible for close relatives and friends plus some limited charity, I can actually fulfill my responsibilities so there’s a reason to try, and so unheroic responsibility is a better model to live by unless you want to impress LWers.
The way I would put it: Doing the right thing is hard. It doesn’t mean one should give up without trying. Also, something can be done better even if it’s not done perfectly.
In what contexts do you try to convince other people of heroic responsibility? Why do you want to frame it that way?
I think the concept comes to LW from HPMOR. Specifically from:
Most people reject that kind of responsibility. It’s no accident that the person who wrote those lines is on a quest to save the world.
It’s also in some sense quite telling that the character who speaks those lines is a child who hasn’t yet learned the rules about what is and isn’t his business. Taking responsibility for the life of another is invasive, and if you look at the story, Hermione isn’t that happy that Harry tries to be the prime hero.
Half a year ago I sat in a café and had a conversation about personal development. As “collateral damage”, the words I spoke brought up a deep personal issue in a stranger next to me, and the person, somewhat angrily, started an interaction with me. I went into a direct 10-minute NLP intervention.
Hopefully it helped, but I didn’t give the person my contact details afterwards to prolong the interaction; I just told him to seek help elsewhere. That particular interaction burned me out, and that was okay because the other person started it.
If, however, I’m sitting in public transportation and the person next to me is crying after ending a telephone conversation, I don’t take it as my responsibility to fix the issue.
There are days where I might do a bit on a nonverbal level, but I wouldn’t impose myself on the situation by speaking. I could try to play the role of a hero, but I often choose against it, and that’s fine.
When it comes to psychological help for friends, I think it’s good to offer it. It’s good to explain to someone which choices are available to them. On the other hand, everybody has a right to feel bad, and if a friend wants to feel bad and/or not be helped by me, then it’s not my business to break through and make him feel well.
That also means that if people around you don’t want to take responsibility, you shouldn’t force it on them. It’s often much better to lead by example. Tell others stories about how you feel great because you made a difficult decision to practice heroic responsibility. Bonus points for picking stories that are not out of reach for your audience ;)
I’m not exactly trying to convince them, just trying to explain the concept. It’s something that occasionally comes up when you mention Less Wrong somewhere on the internet. “Less Wrong, aren’t those the guys that think you are personally responsible for children dying in Africa?”
This looks like good advice. Thank you.
In those cases, it’s useful to explain the advantages of that mindset. Knowing you saved a child in Africa from dying of malaria feels really great. It makes you feel powerful and gives you a sense of agency.
Happiness research shows that giving to other people often makes you more happy than buying possessions for yourself.
To what extent has this been shown when you will never meet or hear directly from the recipients of your gift?
Um. The one time I donated to a charity (as a child), I immediately felt terrible guilt. My family was poor at the time, and I realized my parents might have needed those $300 of saved-up allowance. When I save money, I reduce the risk that I will be a burden to those close to me, and that’s really fucking valuable.
Laser eye surgery (LASIK) has been suggested by several people on LessWrong as a costly procedure with a high likelihood of improving your life. I do not think this is a good trade-off across a lifetime, because of presbyopia.
Almost all humans experience presbyopia: age-related deterioration in the eye’s ability to adjust focus. Historically, the biggest effect for most people has been a reduced ability to read, but now it also affects the ability to use computers.
If you have myopia (short sight), you cannot see distant objects without distance glasses. However, myopia means that you will retain the ability to focus at close distance for longer as presbyopia develops.
So LASIK surgery is not only trading money for better distance vision: if it works, you get better distance vision, but you also get worse close vision as presbyopia develops.
How long will you live in each condition?
According to last year’s survey, the mean age is 27.4 (stdev 8.5). Presbyopia usually develops from age 40. Let us disregard the possibility of uploading, because the mean date for the singularity is 2150. Life expectancy for a 27-year-old US male is another 50 years. This is likely an underestimate: it is a period life expectancy, not a cohort life expectancy, and we do expect future improvements in longevity. It is also a population average, and LessWrongers are smarter and better educated than average, which is associated with longer life. We will nonetheless use it for illustration.
So, very roughly, the trade for an average LessWronger considering LASIK is a reduced need for glasses for 13 years against an increased need for glasses for 37 years or longer. Also, I guess that most LessWrongers with myopia spend much more time reading and using computers than doing tasks that require distance vision, and that they value those activities more.
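As a minimal Python sketch of this arithmetic (using only the rough estimates above, no new data):

```python
# Rough years-of-benefit arithmetic for LASIK, using the estimates above.
mean_age = 27.4          # LW survey mean age
presbyopia_onset = 40    # typical onset age for presbyopia
remaining_years = 50     # period life expectancy for a 27-year-old US male

age_at_death = mean_age + remaining_years        # ~77.4
years_of_benefit = presbyopia_onset - mean_age   # ~13 years of better distance vision
years_of_cost = age_at_death - presbyopia_onset  # ~37 years of worse close vision

print(f"benefit: {years_of_benefit:.0f} years, cost: {years_of_cost:.0f} years")
```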
Therefore this does not seem a good trade to me even if it were free of cost and risk.
Have I made a mistake? Does anyone have more data? I know old people with presbyopia and I know young people with LASIK, but I do not know anyone who has both. I guess some people on LessWrong have had LASIK, but very few or none have presbyopia, so I expect no people here with both. I guess that in universities there are faculty members who have both, so I will ask on my next visits. Sometimes it even seems that all faculty members have myopia!
The need for glasses is a binary variable—either you need them or you don’t.
Someone with presbyopia always needs glasses because he can’t focus both near and far. It doesn’t matter whether that person started with good eyesight or with myopia, or had myopia corrected by LASIK—he will need glasses.
You seem to think that presbyopia “corrects” myopia; that is not so. In geometric terms, myopia “translates” your zone of focus: it shifts it closer to you, so that it no longer includes infinity. But presbyopia narrows your zone of focus, contracts it. You don’t get far vision back by overlaying myopia with presbyopia.
Sorry, I was not clear. I do not think that presbyopia corrects myopia. It even makes it worse at distance. But at close range, myopia can offset the effect of presbyopia.
As presbyopia narrows your zone of focus, you cannot focus as close as before. If you have myopia, you can focus at much closer distances than people with normal vision. Before presbyopia, this is not much use. When presbyopia develops, you have more close vision to spare, and can still read a book or a computer screen when people with normal vision would need reading glasses.
I will give a simplified example. A person has mild myopia and needs a correction of −2.50 dioptres to focus at infinity. They have successful LASIK surgery, which gives them normal vision. They later develop severe presbyopia and require a correction of +2.50 dioptres to focus at a comfortable reading distance. They now need reading glasses. If they had not had LASIK surgery, they would need distance glasses but not reading glasses: they could gain the +2.50 dioptres simply by taking off their distance glasses.
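Here is the same bookkeeping as a small Python sketch, assuming the thin-lens approximation (an uncorrected myopic eye focuses at 1 divided by the strength of its correction, in metres):

```python
# Dioptre bookkeeping for the example above (thin-lens approximation).
myopia = -2.50           # distance correction needed before LASIK, in dioptres
presbyopia_add = +2.50   # reading addition needed after severe presbyopia

# After LASIK the eye is corrected to 0 D, so reading needs the full add:
print("after LASIK, reading correction needed:", presbyopia_add)  # reading glasses

# Without LASIK, taking off the distance glasses leaves the bare myopic eye,
# which focuses at its far point:
far_point_m = 1 / abs(myopia)  # 1 / 2.50 D = 0.40 m, a comfortable reading distance
print("without LASIK, bare-eye focus distance:", far_point_m, "m")
```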
This is why I say, “[with successful LASIK] you get better distance vision but also you get worse close vision as presbyopia develops.” What is more important may vary from person to person.
I have spoken to an optician about this, and she mostly confirmed it. This is only an N=1 argument from authority, though! She agreed, of course, that people with myopia and presbyopia can get good close vision by taking off their distance glasses; they do not need reading glasses unless they wear contact lenses or have had LASIK. However, I must say that she did not think this would be a reason for most young people not to have LASIK. She said she would certainly not advise LASIK for people who have presbyopia or are likely to have it soon. She also said that for people who wear contact lenses for myopia and then develop presbyopia, she suggests under-correction in one eye, so that they have good distance vision in one eye and good close vision in the other.
Need for glasses is not a binary variable. It has more states than that. It must at least include ‘need distance glasses’ and ‘need reading glasses’.
Please keep us updated on your findings and decisions.
I’m also looking at a cost-benefit analysis of LASIK and watching the reports from early adopters. A high chance of an improved quality of life versus a small chance of a significantly reduced quality of life. So this is what risk aversion feels like from the inside.
I have spoken to an optician, as described in my reply to another message in this thread.
I have also asked at a computer science department in a European university. It was funny! Everybody in the department except some grad students had myopia. Many of the older faculty also had presbyopia. But nobody had had LASIK. Sorry, I did not count properly; my visit was for another reason. However, I can say the modal form of vision correction was glasses for distance vision, taken off for reading or computer work. There were also some people with varifocal glasses, and some with contact lenses for distance vision plus reading glasses for reading or computers.
[LINK] Sleep loss can cause brain damage (permanently lost neurons, at least in mice). Even though the study is only about mice, it provides references to more general results:
http://www.uphs.upenn.edu/news/News_Releases/2014/03/veasey/
Scott Aaronson reviews Max Tegmark’s book on the Mathematical Universe hypothesis. Tegmark responds in the comments, with an interesting and still ongoing back-and-forth.
I have a friend with Crohn’s Disease who often struggles with the motivation even to figure out how to improve his diet in order to prevent relapse. I suggested he find a consistent way to not have to worry about diet, such as prepared meals, a snack plan, meal replacements (Soylent is out soon!), or dietary supplements.
As usual, I’m pinging the rationalists to see if there happens to be a medically inclined recommendation lurking about. Soylent seems promising, and doesn’t seem like the sort of thing that he and his doctor would have discussed. My appraisal of his doctor consultations is that they amount to something along the lines of “You should track your diet according to these guidelines, and try to see what causes relapse” rather than “Here’s a cure-all solution not entirely endorsed by the FDA that will solve all of your motivational and health problems in one fell swoop.” For my friend, drilling into sweeping diet changes and tracking seems like an insurmountable challenge, especially with the depression caused by simply having the disease.
I’d like to be able to purchase something for him that would let him go about his life without having to worry about it so much. Any ideas on whether Soylent could be the solution, in particular as to its potential for Crohn’s?
Consider helminthic therapy. Hookworm infection down-regulates bowel inflammation and my parasitology professor thinks it is a very promising approach. NPR has a reasonably good popularization. Depending on the species chosen, one treatment can control symptoms for up to 5 years at a time. It is commercially available despite lack of regulatory approval. Not quite a magic bullet, but an active area of research with good preliminary results.
There is no known “cure-all solution not entirely endorsed by the FDA that will solve all of your motivational and health problems in one fell swoop.” A lot of people with Crohn’s seem to get some benefit from changing their diet, but the conclusions they draw always seem to contradict each other, and in general the improvements are temporary. What it looks like to me (and to at least one person with her own experience of the problem) is that radically changing your diet every few years is what you need to do.
Has anyone ever tried writing rationalist fiction set in The Sandman? It’s a world that explicitly runs on storytelling patterns, but surely something can be done that illustrates the merits of rational thought even in such a setting. Rationalists should win and adapt to the circumstances, even if those circumstances are a dreamscape.
Neil Gaiman himself, sort of. In his take on the Marvel Universe, the genius scientist character has figured out the world runs on storytelling logic instead of mechanical science and that he won’t ever be able to permanently change his friend turned into a superhuman rock monster back into a human since “guy permanently turned into superhuman rock monster” makes a better story element than “guy who was a superhuman rock monster but is all better now”.
For the current fic writers, I don’t see it working. Rationalist fiction writers range from middling to terrible in skill compared to traditional fiction writers at the top of their game like Gaiman, and trying to do this would basically be trying to one-up Gaiman at his own game on his home field. Not to mention that his world-building is already a lot more self-aware than the perennial nerd-culture favorite soft targets like Star Wars or Harry Potter, as the Marvel 1602 example shows, so you wouldn’t have the nice obvious stuff to work with.
@First bit: that’s Discworld-grade brilliant, but does the knowledge spread, as it does in Discworld where everyone is Genre Savvy, or is Reed Richards still Useless? Of course, the problem with narrativium is that attempting to take advantage of it is likely to bite back, but is it better than not knowing?
My very first attempt at writing, waaay back in 2007, was a story about a guy who was thrust into a Narrativium-based world, which was kind of like an immense, live Let’s Play for the entertainment of a bunch of True Fae children (about three centuries old). Grant Morrison’s Action Comics come to mind. He’d be thrust into different genres and different roles, and he’d have to start thinking very meta, very fast, if he wanted to survive each story. He also wanted to get the damned show cancelled without giving a bad performance that would give the Showrunner a reason to fire him (literally), leading to many Springtime For Hitler moments, much to his frustration (sort of like Hideo Kojima and Metal Gear, or Hideaki Anno and Evangelion). But he also starts to take pride in his work (think Walter White and his blue meth). At some point, he’d become aware that the way things made “storytelling sense” rather than “logical sense” also extended to the world outside the game. And then we’d have an Animal Man meets Grant Morrison type of meeting.
Most of the references I’m making here are retroactive; I had the idea well before I came in contact with them, but they’re handy for condensing. Another work I’d compare this to would be GAINAX’s Abenobashi Mahou Shoutengai. In fact, now that I think of it, it might be better to start the story as an Ontological Mystery, instead of having his slavery revealed to him right away as I initially planned.
Anyway, my point is that almost as soon as I wrote the first chapter, I went on hiatus, because I was keenly aware that I was biting off much, much more than I could chew. Even now, I don’t think I have remotely what it takes to plan out and pull off such a project. Designing locomotives is easy work in comparison; all you need is patience and method.
It’s extremely Post-Modern and illogical in every sense. It is awesome.
What have been the most useful activities at the LW meetups you have participated in so far?
Joining the Habit RPG party.
While the local meetup every month has been amazing every time I went, it didn’t have that much impact on my everyday life. Joining Habit RPG definitely has.
Last Friday I joined the tinychat study hall for the first time, and I recommend it very much.
Priming can nudge one’s thoughts in certain directions; fashion can nudge others’.
It’s easy enough to try priming abstract, rational, far thinking with cool blue colours, Mozart, and by surrounding oneself with books… but is there any data on scents that nudge peoples’ modes of thinking in similar directions? Failing that, is there anecdata?
I wouldn’t use the word ‘rational’ there. Taking action instead of spending a lot of energy on mental analysis is often rational.
I would guess that the smell of a library with a lot of books might have an effect on people who have spent a lot of time in libraries.
In some cases, yes; but people tend to be willing to take action all on their own quite often anyway. I’m hoping to find something that nudges in the direction of far-mode thinking for those times when it /is/ appropriate to stop and think.
A lot of people on LW have akrasia problems. Having things in their environment that bring them even more into their heads wouldn’t be beneficial.
I would profit more from an environment that primes me to take action than from one that makes me think more about what I’m doing.
Be aware of the tradeoff you are making.
Can you please add an open_thread tag?
Crapshoot: Say I have some kind of per-country data and I want to use Python or other FOSS tools to plot it on a good-looking map at the country level. Is there a good tutorial for this? I ask because I can do virtually anything else with Python, like data manipulation and analysis or plots, so it’d be nice to do this with Python too.
I know it’s not FOSS or Python, but Google docs has exactly this feature built in to its spreadsheet application.
Looks like this might be what you want.
R is free & open source, and widely used for stats, data manipulation, analysis and plots. You can get geographical boundary data from GADM in RData format, and use R packages such as sp to produce charts easily.
Or at least, as easily as you can do anything in R. I hesitate to suggest it to people who already do data work in Python (it’s less … clean) but in this sort of domain it can do many things easily that are much harder or less commonly done in Python. My impression is the really whizzy, clever stats/graphics stuff is still all about R. (See e.g. this geographic example.) There are many tutorials, some of them very good in parts, but it’s famously slippery to get to grips with.
More on spatial data in R. You can also get a long way with the maps and mapdata packages.
I know about R. In fact I switched from R to Python because R is less … clean. It looks like I will have to use R for plotting though the rest of the stack will be in Python.
Those maps look gorgeous!
You might want to look at basemap for matplotlib.
Disclaimer: I haven’t used this (though I might start), but skimming the synopses, it looks like it will do what you want.
Thanks for the reply. Basemap looks like what I want, but it falls short. It is surprisingly easy to plot arbitrary data on a world map, but I didn’t manage to, e.g., colour the countries of the world by some metric. If you look into basemap, please keep me posted.
P.S.: There is a tutorial that shows how to colour Italy’s regions separately, but I did not manage to colour in the whole world.
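For anyone trying the same thing, here is a minimal sketch of how a world choropleth might work with basemap. It is untested in this exact form and assumes a locally downloaded Natural Earth admin-0 countries shapefile; the file name and the 'NAME' attribute key are assumptions that depend on which shapefile you use:

```python
# Sketch: colour countries by a metric using basemap's shapefile reader.
# Assumes a Natural Earth "admin 0 - countries" shapefile saved locally as
# ne_110m_admin_0_countries.{shp,dbf,shx}; attribute keys vary by shapefile.
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Polygon
from mpl_toolkits.basemap import Basemap

metric = {'Italy': 0.8, 'France': 0.3, 'Germany': 0.6}  # hypothetical data

fig, ax = plt.subplots()
m = Basemap(ax=ax)            # default cylindrical projection, whole world
m.drawcoastlines(linewidth=0.3)
m.readshapefile('ne_110m_admin_0_countries', 'countries', drawbounds=False)

cmap = plt.get_cmap('viridis')
patches, colours = [], []
for info, shape in zip(m.countries_info, m.countries):
    value = metric.get(info.get('NAME'))  # 'NAME' key is shapefile-specific
    if value is not None:
        patches.append(Polygon(shape, closed=True))
        colours.append(cmap(value))

ax.add_collection(PatchCollection(patches, facecolor=colours, edgecolor='grey'))
plt.show()
```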
What does Everett Immortality look like in the long term?
The general idea of EI is that there is always some small chance you will survive in any given situation, so there will be some multiverse timelines whose present is the same as your present, but in which you keep on living indefinitely. However, some forms of survival are a lot more likely than others; eg, it’s a lot more likely that my cryonically-preserved brain will be scanned and turned into an AI than that a copy of my brain will spontaneously appear out of nothingness. Thus, it makes sense to plan around the most likely sorts of scenarios, and not to bother doing much planning for the least likely ones.
But thinking /very/ long term, to the heat death of the universe… every form of negentropy is going to end up exhausted, with no more energy gradients that life and intelligence could use to survive; meaning that however extended a life might be, there will be some point at which all of a person’s futures eventually fade away...
… or maybe not. Thermodynamic miracles—events violating ordinary statistics—will, in the long term, happen every so often… so might it be possible for some form of life in that era to rely on them as the last available source of negentropy? Which forms of TMs occur most often, and could most reliably be ‘fed’ from? How often do they occur, compared to the potential stability of patterns of matter-energy at this time-scale?
You’re assuming some sort of pattern theory of identity when you consider uploads a potential form of survival. If you go all-out pattern theory of identity and assume we’re in a big world, is there a reason why the subjectively subsequent moments of awareness need to take place at increasing time points on the universe’s timeline? A state of matter that corresponds to your pattern’s subjective t + 1 might have occurred at the universe’s t − 10000 in some distant light cone. If your mind stays at any finite size, it’ll eventually just end up going over the same states again, so you could get an unbounded subjective experience timeline inside a fixed timeslice of a spatially infinite, temporally finite universe.
.
Infinite as in “if you succeed in making it into situation X, you are guaranteed to live forever”, or merely potentially infinite, as in “for every situation X where you are alive, in some Everett branch you will survive it” (in other words, you never run out of quantum immortality)? In the latter version, the integral of the ‘afterlife’ may still be smaller than the integral of ‘normal life’.
Good point.
During a person’s ‘normal’ life, the number of Everett branches containing that person approaches infinity. The way mortality currently works is that there is a certain probability that you will die during each year; let’s say it’s 0.01 when you’re 20. That fraction of Everett branches gets “eliminated” each year. The probability of dying increases each year, until it approaches 1 when you’re close to the age of 120. Let’s ignore life-extending technologies. In the Copenhagen interpretation, the probability that you’re alive after the age of 120 is effectively zero. In MWI there are a few branches that survive beyond this, some of them for a very long time, potentially forever. So I agree with you that the integral of branches during a person’s normal life is probably greater than that of the smaller number of branches that survive almost forever. This is true even if the number of branches or their length is infinite; didn’t Cantor prove that there are different sized infinities?
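A toy sketch of this branch-counting, with a made-up hazard curve of the kind described (the numbers are illustrative, not actuarial):

```python
# Toy branch-counting for the mortality story above: the death probability is
# 0.01 at age 20 and doubles every 15 years, approaching 1 near age 120
# (illustrative parameters, not real mortality data).
alive = 1.0  # fraction of branches still containing a living "you"
for age in range(20, 121):
    p_death = min(1.0, 0.01 * 2 ** ((age - 20) / 15))
    alive *= 1.0 - p_death
    if age in (20, 45, 70, 95, 120):
        print(f"age {age}: {alive:.3g} of branches remain")
```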
Is this what you were after? I’m a bit confused. Tell me if I made any mistakes.
As Max Tegmark mentioned on this Rationally Speaking podcast, quantum immortality might only work if the universe is infinite.
I don’t have bandwidth for a podcast just now; so ‘infinite’ in what direction? If the number of MWI timelines can be divided infinitely, then that seems like it would suffice, even if the universe is finite in many other ways.
As I recall, he doesn’t believe the universe is infinite in any direction.
Did he give any reasoning for that belief? Eg, does assuming non-infinitesimal worldlines improve the predictions of double-slit-style interference experiments?
Certainly not the latter.
If there were any perceptible grain to them, we’d be about a picosecond from the abrupt end of the universe-as-we-know-it.
Again, from what I recall: scientists have not found any evidence of infinities, mathematical incompleteness problems go away without infinities, and computational physics models work even though computers have finite memories.
Quantum immortality is a poor atheist’s immortal soul.
That’s the opposite of comforting.
How so? Don’t people find it comforting believing that there are universes where they survive against impossible odds?
Mere survival doesn’t sound all that great. Surviving in a way that is comforting is a very small target in the general space of survival.
Beats dying if you believe that some day you will be saved BY THE POWER OF SCIENCE!
So let’s say you’re a soldier in battle in 2000 BCE. Someone just slashed your stomach open with a sword, you’re in horrible pain, your internal organs are spilling out, but you’re still conscious and aware of what’s happening. How are quantum immortality and the power of science going to work out for you now?
EDIT: I thought quantum immortality was a thing that applies to everyone everywhere. Are we discussing some more constrained version here that doesn’t apply to “your chest just got smashed by an engine block but you’re still conscious for a little while” but does apply to cryonics, uploading, and other information-theoretic undeath shenanigans?
The answer is the most likely miracle, but I am not sure what exactly that would be. All the necessary miracles are so improbable that I don’t trust my ability to evaluate their relative probabilities.
It could be something like: by random movement of atoms, your organs jump back inside and your wounds heal (and your body overcomes the infection). All witnesses stop fighting and start worshipping you as a god. You don’t understand the situation, but successfully use your new status to stop the war or escape from it. You collect smart people around you, supported by your followers’ donations, and together you invent science relatively slowly. It still takes a hundred years or more, during which you miraculously survive with sufficient brain function. At the end, your team develops a recursively self-improving AI (not necessarily a Friendly one, only one that wants to keep you alive).
Despite all the miracles, this seems like the least miraculous path from “cut with a sword” to “immortality”. (Assuming the damage really happened, because otherwise the most likely path starts with “you wake up from the nightmare”.)
This is curiously detailed for something where basically the only requirement is that you stay aware of every moment, constant horrible pain and debilitating injuries aren’t any sort of problem unless they keep you from staying conscious, and there’s basically no lookahead beyond whatever the duration between consecutive states of subjective consciousness is, definitely something less than a second.
Sure, someone in the multiverse is going to get the happy shiny human-friendly thermodynamic miracle starting up for them, but it seems like there’d be countless quite a bit less improbable quivering masses of horrible injuries and pain who Just. Can’t. Die.
I mean, think of the lookahead. Sure, the miracle scenario has you having a lot bigger measure of existence after the miracle has taken place, but there doesn’t seem to be a point going directly forward from the lethal injury state where it’s more likely to go down the path of the miracle starting to happen than to just stay improbably aware in your current rapidly decaying state. You’d probably end up with some incredibly measure-sparse weird Boltzmann-brain-like states in the end, but isn’t it possible that at every step along the way there are a lot more pseudo-Boltzmann-brain futures than there are body-repairing thermodynamic miracle futures?
.
What claim?
.
I find it counterproductive to assign probability or truth value to untestables.
.
If your decisions depend on untestables, you need a better decision theory.
.
Quantum immortality is based on MWI, which is designed explicitly to match the standard “shut up and calculate” approach to QM, which means that it cannot have any measurable effects outside the standard framework, where “Everett branches” are known as “possible outcomes”. If you expect different consequences for your personal experience in the two pictures, you probably do not understand MWI.
.
Blanking your comments before retracting them? To hide changing your mind after learning stuff?
No, now that I got a clear picture of this issue I will delete this account among other things. Sorry for bothering you.
I don’t think removing the content from your comments is a good way to react to changing your mind, if that is your reason.
What might these consequences be?
.
I don’t think that’s how MWI works.
.
Does not follow. MWI is orthogonal to “some nasty existential horror shit”, it doesn’t provide evidence either for or against your worries.
.
I have no idea what you worry about, but according to our current understanding, in this life there is no detectable difference between a Copenhagen world and an Everett world. As to the afterlife, all bets are off—contemporary physics can’t help you there.
.
Trying to understand quantum physics on the basis of web comics doesn’t strike me as useful. The lesson you should draw from that comic is that standing near a nuclear bomb when it explodes is a bad idea.
Whatever happens to you after you die.
A Puddler’s Tale
I expect the future of human emulated minds to be interestingly similar.
Would it be possible for a comment to have anchors that are Karma scored separately, so that someone making several points in the same comment can see which one are getting/losing Karma?
Just make multiple comments.
If the points are chained to make a coherent argument, that’s going to risk having the argument split up, whether you nest them as replies or put them sequentially.
Amusing final sentence from Clarke & Primo (2005):
I’m having trouble knowing how well I understand a concept while I am learning it. I tend to be good at making up consistent verbalizations of why something is the case, or how something works. However, these verbalizations aren’t always accurate.
The first strategy against this tendency is simply to do more problem sets with better feedback. I’m wondering if we can come up with a supplementary strategy for checking whether I really understand a concept or not.
What does that phrase mean?
Selective manipulation of learning through the application of a mild electrical current to the brain.
The reference is paywalled.
Here
I’m contemplating going to grad school for psychology.
I’d really like to focus on the psychology of religion, but there are other areas of psychology that I find interesting too (e.g. evolutionary psychology).
I don’t have a background in psychology; I took one intro course in undergrad to fulfill a requirement for my bachelor’s in IT. I do read a lot of pop-sci about psychology.
Anyone have any advice for me going forward?
Think about how you will earn your living. Who will pay you money, for what, and how much? In particular, consider this under the assumption that you will NOT be able to get a tenured position in academia.
Yes… I’ve been reading up on all of the horror stories of adjunct professors...
What’s the process for selecting what ‘rationality blogs’ are featured in the sidebar? Is it selected by the administrators of the site?
I’m surprised some blogs of other users with lots of promoted posts here aren’t featured as rationality blogs.
They asked everyone what blogs they wanted on the side panel when they redesigned the site. I don’t think the list has been changed since they put it up.
Thanks for the information. In that case, I hope there will be another opportunity in the future to suggest which blogs are featured on the side panel. I don’t know what anyone else is looking for, but as far as I’m concerned, I check these other rationality blogs as often as I check things posted directly to Less Wrong. I find Slate Star Codex and Overcoming Bias particularly interesting. Anyway, if other people gain similar value from these other blogs, perhaps more could be added in the future. I understand that if each of us freely suggested what blogs we individually considered ‘rational’, there would be lots of noise, redundancy, and swamping of the forum with poor suggestions. So, I may start a poll in the future asking which blogs the community as a whole would like to see added.
Long shot again:
Any LW NYCers have a room available for <$1,000 per month that I (a friendly self-employed 23-year-old male) might be able to move into within a week or two? Or leads on a 1br/studio for <$1400? I could also go a bit above those prices if necessary.
PM me if so and I’ll send more details about myself. I’m also staying with some friends in NYC right now so we could meet up anytime.
Have you considered posting to the NYC LW mailing list? I don’t think most of them are regularly here these days.
Thanks, I was going to take your advice, but I got lucky and found a nice place yesterday.
What is your irrational reading guilty pleasure? Whenever I need a cheap laugh, I browse Conservapedia. Where do you go to indulge the occasional craving for high-octane idiocy?
I frequent reddit, that is bad enough.
RationalWiki.
Russian LiveJournal. With the whole Crimea business going on, the shitstorm there is really powerful as of now...
4chan
Any feelings of your mind being killed while doing so?
No, it would take more than that. Google “Golden Age of Gaia” if you’d like some serious brain death.
Edited: http://www.whale.to/ and http://www.naturalnews.com/ are their own brand of terrible.
This is like saying, what sort of shit do you look for when you want to smell something really horrible?
Facebook announced graph search with great fanfare, but if I want to know something simple, like getting a list of my recently added friends, I can’t just type it into the search bar; I have to search Google, only to find that I have to go through the recent activity tab.
Similarly, I have told Facebook through its menus that I speak English and German. It still shows me French and Romanian posts from my friends that I can’t read, and it doesn’t offer to translate them. A simple idea like showing me the English posts my French friends write, but not their French posts, just doesn’t seem to be implemented.
When using the Facebook app, I can’t easily view all the events happening today.
What do all those Facebook engineers do, when they don’t even seem to pick the low-hanging fruit?
You have to remember that you are not the customer for Facebook… you are the product.
Giving you more control over your timeline and the posts you see is good for you, but makes Facebook’s ability to charge for access to you through “promoted posts” substantially less inviting.
On the other hand, something like graph search allows the opportunity to compete with Google and LinkedIn.
Just now I had the experience of having Facebook helpfully place directly on my newsfeed a post by somebody who is not on my friends list, who happens to trigger the hell out of me and who I actively avoid reading about on Facebook as much as possible. Thanks Facebook, great algorithms you’ve got there.
Based on discussion at the South Bay Area meetup tonight.
The five pillars of Islam are
Shahadah, confession of faith: Declaring that there is no god except God, and Muhammad is His prophet.
Salat, ritual prayer, five times a day.
Sawm, fasting during Ramadan—ideally, eating nothing between sunrise and sunset.
Zakat, giving alms.
Hajj, making a pilgrimage to Mecca.
By analogy, I propose five pillars of LessWrongIsm:
Confession of faith: “There is no God except the one we’re going to build, and the sage Yudkowsky is Its prophet.”
Polyphasic sleep, five times a day. An acceptable alternative is polyamorous sex, five times a day.
Diet. Any unusual diet will do—paleo, four spoons of sugar in the morning, strict vegan, whatever; but it must be strictly adhered to between sunrise and sunset.
Efficient altruism.
Moving to the Bay Area, or making pilgrimage to a CFAR workshop.
The topic “Is LW a cult?” has been discussed so much, here and elsewhere, that it is probably worth creating an LWiki page about it, including a discussion of the term ‘cult’ and of when applying it constitutes a non-central fallacy.
Can you mix and match? I don’t think I could keep up with either of those by themselves.
As far as I know, there’s Uberman with 6 naps and Everyman with 4; I don’t know of a 5-nap schedule to which people have successfully adapted.
I don’t think we care about the timeframe of sunrise to sunset. It has to be the whole day, but there might be one cheat day per week.
I would like to gently suggest that you may have missed the way my tongue was poking into my cheek.
And this is why people mistake us for a cult.
I believe RolfAndreassen is being humorous.
I get he is. But Poe’s Law works both ways: there’s no self-parody that some clueless outsider won’t mistake for real lunacy.
That’s a good thing—I would much prefer that somebody that clueless just shake his head and continue on his merry way.
True, we don’t want to attract that particular person. But the misinformation he or she is going to spread may discourage many potential desirables.
I’d say it’s worth it to have some humor and somewhat self-deprecating fun here.
It’s not only worth it, it is sorely needed. Taking yourself too seriously is a debilitating disease that can be fatal.
One of the signs of a cult is “grimness” — “disapproval concerning jokes about the group, its doctrines or its leader(s).”
How come that list doesn’t mention hero worship?
I don’t know, and unfortunately the author is dead so we can’t ask him.
That said, “hero worship” could mean a number of different things, not all of which might be symptomatic of a dangerous cult. Could you expand on what you mean by it?
Eliezer Yudkowsky is one of the most accomplished, knowledgeable, and stimulating writers I’ve ever encountered, and if he ever were to visit my house, I’d buy a freezer large enough to accommodate his head, just in case he choked on my boiled chickpeas. That being said, I think elevating him to Chuck Norris status is decidedly harmful to the propagation of our cause. He himself has advocated that we don’t worship Einstein, because it obscures the fact that he was just as human as we are, and discourages others from striving to achieve his level. Likewise, EY is no superhero, no demigod, no mythic savior, and it won’t do to treat him like one. This is why, as much as I admire the guy’s awesomeness, I’m against the existence of the “EY Facts” thread. I can’t explain rationality to others and keep a straight face while thinking that the author I’m citing is the Way, the Truth and the Life, the last hope and salvation of humanity. Leave it to history books to sing his praises, but for the time being, it will be the opposite of helpful.
I think the “EY facts” goes the other way. That’s not hero worship, that’s making a joke of hero worship.
“Chuck Norris status” is the opposite of hero-worship. Is there anybody who seriously believes that Chuck Norris is actually possessed of superhuman powers? Heck, is there anybody who even seriously believes he’s a uniquely talented actor?
I’m having difficulty parsing which parts of this comment are intended to be “within quotes” as an example of hero worship…
The great-writer, chickpeas-and-freezer part is truly my opinion.
Telling the world that EY is a great writer etc. is fine. Telling the world that you believe him to be great enough that you’d buy a freezer large enough to accommodate his head, in case he died in your house, is much worse than self-mockery such as the EY facts page.
No offense, but I suggest that you stop trying to improve the reputation of LW/MIRI. If MIRI wants to improve their reputation and public relations they should hire a professional outsider who is neurotypical (I am neither, so maybe I am wrong about the impression your opinion gives).
Upon rereading my post after a full night’s sleep, I can see the problems with how I expressed it. I agree that it may have come off as too fanboyish, and we’re seeing the line between fanboyism and idolatry at different positions. Continued argument will only dig me deeper.
Oh, dear. Elevating EY to Chuck Norris status is hilarious and, I would argue, shows “our cause” in good light.
Maybe elevating EY to the divinely-inspired-prophet (PBUH) status would be harmful, but I haven’t seen anyone do that.
I don’t see any need to keep a straight face. I don’t know if I am typical, but I don’t respond well to things explained to me with a terribly serious expression (well, as long as they don’t involve things like staunching bleeding from open wounds and such).
Just one data point here. The EY facts post was funny and not at all cultish. Whereas your first sentence (and to a lesser extent the whole comment) made me cringe.
They’re attempting it, but it isn’t sufficiently amusing for the tradeoff to be worth it.