Open Thread, June 16-30, 2013
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Genes take charge and diets fall by the wayside.
You need a New York Times account to read it, but setting one up only takes a couple of minutes. Here are some excerpts in any case.
Obese people almost always regain weight after weight loss:
Thin people who are forced to gain weight find it easy to lose it again:
The body’s metabolism changes with weight loss and weight gain:
Genes and weight:
On the other hand, here’s a study that shows a very strong link between impulse control and weight. I’m not really sure what to believe anymore.
The impulse-control measure they use is a facet of Conscientiousness, and we already know Conscientiousness is highly heritable...
Yes, but it is still potentially useful to know how much of the heritability is metabolically vs. behaviorally manifested.
Also more generally, we should be careful about mixing different levels of causation.
Unless I’m missing something, they don’t describe the size of the effects of personality that they found, just the strength of the correlations.
I’m not too clear on how to interpret hierarchical model coefficients, but they do give at least one description of effect size, on p. 6:
and p. 8:
Thanks. Those differences are small compared to common differences of BMI, though.
Well, yeah, you should’ve expected that from the small correlations.
I don’t have much knowledge of statistics. You may have forgotten what that’s like.
In principle, something (e.g. how much the mother eats during the pregnancy) might affect both those things, with no causal pathway from one down to the other.
Moderately surprising corollary: so society IS treating fat people in a horribly unjust manner after all. Those boring SJW types who have been going on and on about “fat-shaming” and “thin privilege”… are yet again more morally correct on average than the general public.
Am now mildly ashamed of some previous thoughts and/or attitudes.
What are we to make of the supposedly increasing obesity rate across Western nations? Is this a failure of measurement (e.g. standards for what counts as “obesity” are dropping), has the Western diet changed our genetics, or something else altogether?
If it were mainly genetics, then I would think that the obesity rate would remain constant over time.
Environmental changes over time may have shifted the entire distribution of people’s weights upwards without affecting the distribution’s variance. This would reconcile an environmentally-driven obesity rate increase with the NYT’s report that 70% of the variance is genetic.
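A toy simulation makes the point concrete (every number here is invented for illustration; only the ~70%-of-variance figure traces back to the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: BMI = era baseline + genetic component + environmental noise.
# The genetic variance is identical in both eras; only the baseline shifts.
n = 100_000
genetic = rng.normal(0, 3.0, n)                    # same genetic spread in both eras

bmi_1980 = 23 + genetic + rng.normal(0, 2.0, n)
bmi_2010 = 27 + genetic + rng.normal(0, 2.0, n)    # whole distribution shifted up by 4

for label, bmi in [("1980", bmi_1980), ("2010", bmi_2010)]:
    h2 = np.var(genetic) / np.var(bmi)             # genetic share of the variance
    obesity = np.mean(bmi >= 30)                   # BMI >= 30 counts as obese
    print(f"{label}: genetic share of variance ~{h2:.2f}, obesity rate ~{obesity:.1%}")
```

The genetic share of variance stays near 0.7 in both eras while the obesity rate multiplies several-fold, because the uniform environmental shift moves the mean without touching the variance.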
The obvious cross-comparison would be to look at populations’ distributions of weight and see if they share the same pattern, shifted left or right based on the primary food source.
Hypothesis possibly reconciling the link between impulse control and weight, strong heritability of both, resistance to experimental intervention, and society-scale shifts in weight:
Body weight is largely determined by the “set point” to which the body’s metabolism returns, hence the resistance to intervention. This set point can be influenced through lifestyle, hence the link to impulse control and the changes across time/cultures. However, this influence can only be exerted either a) during development and/or b) over longer time scales than are generally used in experiments.
This should be easy enough to test. Are there any relevant data on e.g. people raised in non-obesity-ridden cultures and then introduced to one? Or on interventions with obese adolescents?
I dunno, ask the OP. I was merely pointing out that in the event that obesity has a more or less significant hereditary/genetic component, the social stigma against it must be an even more horrible and cruel thing than most enlightened people would admit today.
(Consider, for example, just the fact that our attractiveness criteria appear to be almost entirely a “social construct”—otherwise it’d be hard to explain the enormous variance; AFAIK the only human universal is a preference for facial symmetry in either gender. If society could just make certain traits that people are stuck with regardless of their will, and cannot really affect, fall within the norms of “beauty” in a generation or two… then all the “social justice”/”body positivity”/etc campaigns to do so might have a big potential leverage on many people’s mental health and happiness. So it must in fact be reasonable and ethical of activists to “police” everyday language for fat-shaming/body-negativity, devote resources and effort to press for better representation in media, etc.
Yet again I’m struck by just how rational—in intention and planning, at least—some odd-seeming “activist” stuff comes across as on close examination.)
A possible hypothesis is that the genes encode your set point weight given optimal nutrition, but if you don’t get adequate nutrition during childhood you don’t attain it. IIRC something similar is believed to apply to intelligence and height and explain the Flynn effect and the fact that young generations are taller than older ones.
Flynn effect?
Sure. Fixed. Thanks.
I’ve moved away slightly from SJW attitudes on various matters since starting to read LW, Yvain’s blog and various other things; however, I’ve actually moved closer to SJW attitudes on weight since researching the issue. The fact that weight loss attempts hardly ever work in the long run is what has changed my views the most.
[OT: just noting that one could be “away from SJW attitudes” in different directions, some of them mutually exclusive. For example, on some particular things (racial discrimination, etc) I take the Marxist view that activism can’t help the roots of the problem, which are endemic to the functioning of capitalism—except that I don’t believe it’s possible or sane to try and replace global capitalism with something better anytime soon, either… so there might be no hope of reaching “endgame” for some struggles until post-scarcity. Although activists should probably at least try and defend the progress made on them to date from being optimized away by hostile political agendas.]
Actually, I still suspect that the benefits in increased happiness and mental health would outweigh the marginal gains from pressuring lots of people to try and lose weight, even if weight depended in large part on personal behaviour. And social pressure is notoriously indiscriminate, so any undesirable messages would still hit people who can’t or don’t really need to change.
Plus there are still all the socioeconomic factors outside people’s control, etc.
Whether or not this result is correct, society is definitely shaming the wrong people: some perfectly healthy people (e.g. young women) are shamed for not being as skinny as the models on TV, and not much is being done to prevent morbid obesity in certain people (esp. middle-aged and older) who don’t even try to lose weight.
(Edited to replace “adult men” with “middle-aged and older” and “eat less” with “lose weight”.)
Yeah, and so it looks more and more like (as terribly impolite as it might be to suggest in some circles on the Internet) we need much higher standards of “political correctness” and a way stronger “call-out culture” in some areas.
Most activists are neither saints nor superhumanly rational, of course—but at least in certain matters the general public might need to get out of their way and comply with “cultural engineering” projects, where those genuinely appear to be vital low-hanging fruit obscured by public denial and conformism.
A social justice style which includes recruiting imperfect allies rather than attacking them.
I’m pretty sure that call out culture needs some work. It’s sort of feasible when there’s agreement about what’s privileged and what isn’t, but I’d respect it more if there were peace between transgendered people and feminists.
From a place of general agreement with you, looking for thoughts on how to go forward:
Are second-wave feminists more transphobic than a random member of the population? Or do you think second-wave hypocrisy is evidence that the whole second-wave argument is flawed?
Because as skeptical as I often am of third-wave as actually practiced, they are particularly good (compared to society as a whole) on transgendered folks, right?
I don’t think the problem is especially about transphobia, I think it’s about a harsh style of enforcing whatever changes people from that subculture want to make. They want to believe—and try to enforce—that the harshness shouldn’t matter, but it does.
This may offer some clues about a way forward.
IME “call out culture” feminists are very anti-transphobia. Second wave feminists aren’t so interested in getting people to check their privilege.
If that’s true, then I don’t understand NancyLebovitz’s criticism of “call out culture” or the relevance of her statement to Multiheaded’s point.
I think that “calling out” types can be extremely harsh and unpleasant—I agree with NancyLebovitz there. However, I don’t get what she meant by the problems between feminist and trans people leading her to respect it less.
I mean that call out culture presents itself as an optimal way for people with different levels of privilege to live with each other, and I think that the intractable problem between second wave feminists and transpeople is evidence that there are problems with call out culture, even if what second wave feminists have been doing is technically from before the era of call out culture.
There used to be a really good analysis of the problems with call out culture at ozyfrantz.com, but that blog is no longer available.
I see. Personally, I’m struggling with the proper application of the Tone Argument. In archetypal form:
A: I don’t like social expression X (e.g. scorn at transgendered).
B: You might have a point, but I’m turned off by your tone.
A: I don’t think my tone is your true rejection.
But in practice, this can devolve into:
B: Social expression X isn’t so bad / might be justified.
A: B deserves to be fired / assaulted / murdered. (e.g. a mindkilled response)
B: Overreacting much?
which is clearly problematic on A’s part. Separating the not-true-rejection error by B from the mindkilled problem of A is very important. But the worry is that focusing our attention on that question diverts from the substantive issue of describing what social expressions are problematic and identifying them when they occur (to try to reduce their frequency in the future).
The fact that second wave feminists exercised cisgender privilege to be hurtful to the transgendered seems totally distinct from the “Tone Argument” dynamic.
http://web.archive.org/web/20130412201542/http://ozyfrantz.com/2012/12/29/certain-propositions-concerning-callout-culture-part-one/
Thanks very much.
I wanted all three of the major articles, but that was easy enough to find from your link.
http://web.archive.org/web/20130412200333/http://ozyfrantz.com/category/callout-culture/
I’m pretty sure “trying to eat less” is exactly the wrong thing to do. Calorie restriction just triggers the starvation response which makes things worse in the long run.
Change what you eat, not how much.
Physics is still relevant. The only way to lose weight (outside of surgery) is to spend more energy than you take in. The problem, of course, is that your energy intake and your energy output are functions of each other plus a lot of other things besides (including what’s on your mind).
I still think that for most people (aka with an uninformative prior) the advice of “Eat less, move more” is a good starting point. Given more data, adjust as needed.
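To illustrate the “functions of each other” point, here is a deliberately crude feedback sketch (the constants are assumptions, not real physiology) in which daily expenditure scales with body weight, so weight drifts toward an equilibrium set by intake:

```python
# Crude energy-balance sketch (illustrative assumptions, not physiology).
KCAL_PER_KG = 7700        # rough energy content of a kg of body tissue
KCAL_PER_KG_PER_DAY = 32  # assumed daily expenditure per kg of body weight

def simulate(weight_kg, intake_kcal_per_day, days):
    for _ in range(days):
        expenditure = KCAL_PER_KG_PER_DAY * weight_kg
        weight_kg += (intake_kcal_per_day - expenditure) / KCAL_PER_KG
    return weight_kg

# Same intake, different starting weights: both converge on
# 2500 / 32 ~ 78 kg over a few years.
print(simulate(70.0, 2500, 3 * 365))
print(simulate(90.0, 2500, 3 * 365))
```

In this toy version the equilibrium is just intake divided by expenditure-per-kg; the set-point debate is essentially about whether the body actively adjusts the expenditure side to defend a particular equilibrium.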
It’s not that unusual for people to regain what they lost plus more after a failed diet.
I’m pretty sure “Force feeding yourself as much fat as you can keep down with the aid of anti-emetics, taking glucose intravenously while injecting insulin, estrogen and testosterone and taking a β2 antagonist” is closer to “exactly the wrong thing to do”.
I’ve replaced “eat less” with “lose weight” because I don’t want to go into this, but see Lumifer’s reply.
And here is the kind of attitude that, in my eyes, justifies all the anger and backlash against fat-shaming. Oh damn, I feel like I understand the SJW people more and more every time I see crap like this.
http://staffanspersonalityblog.wordpress.com/2013/05/30/the-ugly-truth-about-obesity/
The “harsh truth” is that people suffering from obesity need to be protected from such vile treatment somehow, and that need is not recognized at the moment. Society shouldn’t just let some entitled well-off jerks with a fetish for authoritarianism influence attitudes and policy that directly affect vulnerable groups.
...
Goddamn reactionaries everywhere.
I found that quite hard to read. Even if poor impulse control were the sole cause of obesity, there would be no reason to attack the obese so nastily, instead of, for instance, suggesting ways that they might improve their impulse control. I find the way he relishes attacking them incredibly unpleasant.
In fact, the internet has quite a lot to say about improving impulse control.
I reckon there’s special pleading going on with the obese. Way more anger & snottiness gets directed at them (at least on the parts of the Internet I see) than at, say, smokers, even though smoking is at least as bad in every relevant way I can think of.
(Here’re some obvious examples. At an individual level, smoking is associated with shorter life at least as much as obesity. At a global level, smoking kills more and reduces DALYs far more than high BMI. Like obesity, smoking is associated with lower IQ & lower conscientiousness. And so on.)
Hint hint: it matters less to some people whether the group they are trying to subjugate is delineated by economic class, race, gender, sexuality or body issues… as long as they get to impose their hegemony and see the “deviants” suffer. It’s scary to see such a desire to dominate, control and punish.
(Related: check out the pingback on that post.)
Perhaps we should dominate, control and punish those evil people who use the available Bayesian evidence when dealing with individuals.
I also predict that a lot of those evil people will be white, male, and wealthy, so we should focus on members of those groups.
It’s not scary if the good people are doing it, right? And, of course, by “good” I mean members of our tribe.
Not nearly all such people are outright sadistic and power-hungry, but those who are can spin complex ideological rationalizations that push the “Overton window” and allow the “good” bourgeois to be complicit with a cruel and unjust system.
See e.g. the “Reagan revolution” in America and the myth of the “welfare queen” that’s a 3-for-1 package of racism, classism and sexism. I’ve read a bit about how it has been fuelling a “fuck you, got mine” attitude in poor people one step above the underclass; the system hasn’t actually been kind to a white/male/lower-middle-class stratum, but it has given them someone to feel superior to. It’s very similar to how the ideologues of the Confederacy explicitly advocated giving poor white men supreme rule over their household as a means of racial solidarity across the class divide.
False equivalence. Of course, any movement can degrade into an authoritarian-populist, four-legs-good-two-legs-bad version, given a vicious political atmosphere and polarized underlying worldviews, but… it happens to dominant/conservative ideologies, too! The dominant group just doesn’t notice the resulting violence and victimization because from its privileged position it can afford an illusion of social peace.
If we agree that it’s a danger of political processes in general rather than of specific movements, could we stop sneaking in implicit arguments that a particular ideology is safe from viciousness and indiscriminate aggression?
See also: social dominance theory.
(More on SDT)
People of all kinds of political opinions are able to use myths to support their opinions. People of all kinds of political opinions can be power-hungry. People of all kinds of political opinions can declare other people evil and use hate against them for their own political advantage.
Can we agree on this, or can you tell me an example of a major political movement that does not do that? (Because you provided some specific examples, and I am too lazy to counter that with specific examples in the other direction, unless that really is necessary. I suppose we could just skip this part and agree that it is not necessary.)
???
Is your assumption that any effort to limit cruelty will necessarily be cruel? Or that SJs in particular are especially untrustworthy? Something else?
My assumption is that SJs are good at finding faults of everyone else, and completely blind to their own. (Which is actually my assumption for all political movements.) I don’t consider SJs more untrustworthy than any other group of mindkilled people explaining why they are the good guys and their enemies are the bad guys.
Their amateur psychoanalysis lacks self-reflection. Those other people, they want to dominate, control and punish. That would obviously never happen to us! Now let me explain again why everyone who disagrees with us is evil and must be stopped...
I think you’re right in general, but I don’t think “protected from” is a good way to frame it, as though fat people are the passive recipients of attacks, and some stronger force has to come in to save them. (I’m not sure quite what you meant, or even if you were just angry about a bad situation and used the first phrase that came to mind.)
The world would be a much better place if the attacks stopped. I’m not sure what the best strategies are to get people to stop seeing fatness and thinness as moral issues. The long slow grind of bringing the subject up again and again, with whatever mix of facts and anger seems appropriate, seems to be finally getting some traction.
Absolutely. I just meant to say that there’s a need for intersectionality and solidarity in such struggles, i.e. even people who aren’t from marginalized groups that are directly targeted by shit-stains like Mr. Staffan here should still call such shit-stains out on their shit.
Am I missing a connection between your post and coffespoons’ that makes yours a response to his?
http://aeon.co/magazine/health/david-berreby-obesity-era/
(EDIT: Found another article about that here: http://marginalrevolution.com/marginalrevolution/2013/08/the-animals-are-also-getting-fat.html)
The study referenced appears to be from here: http://rspb.royalsocietypublishing.org/content/278/1712/1626.short
Here is one theory on an environmental cause of obesity: https://en.wikipedia.org/wiki/Obesogen
Here is a study that suggests jet fuel causes obesity, and that the effect is epigenetic: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3587983/
Another interesting link I’d like to save here: http://slatestarcodex.com/2015/08/04/contra-hallquist-on-scientific-rationality/
EDIT: More links. Haven’t gone through them thoroughly yet though. Putting this stuff here more for future reference:
https://www.reddit.com/r/science/comments/3xyu5r/fat_but_fit_may_be_a_myth_researchers_say_the/cy9b52l
https://www.reddit.com/r/science/comments/3xyu5r/fat_but_fit_may_be_a_myth_researchers_say_the/cy91nu5
Looking for any relevant research or articles on the causes of obesity, or effectiveness of interventions.
Another link to dump for now: https://www.reddit.com/r/science/comments/4zupkq/new_study_finds_that_the_bmi_of_adopted_children/
Adipocyte count is essential to maintaining weight.
It is unclear to what extent weight is genetic rather than environmentally set at a later stage in development.
I am unable to find whether fat cell count can be changed over this 8 year time scale, though my biochemistry professor was inclined to that hypothesis.
Heredity and weight:
The long-term weight loss cited in this review used a 1-2 year followup, during which time only <16% of adipocytes could have turned over.
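(Back-of-the-envelope behind that bound, assuming a constant annual turnover rate r: the fraction of adipocytes replaced after t years is 1 − (1 − r)^t, so r ≈ 8%/year over a 2-year followup gives 1 − 0.92² ≈ 15%, consistent with the “<16%” figure.)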
People such as the author of The Hacker’s Diet, who lost a sizeable fraction of his weight as an adult and then stayed there for decades, seem to me to suggest that it can.
ADBOC. I don’t know that seeming like someone who is starving, so long as you aren’t actually at risk of dying from starvation and your micronutrient intake stays adequate, is a bad thing, and indeed the evidence seems to suggest that it isn’t.
And yeah, slowed metabolism means that if you go straight back to eating as much as you did before starting the diet, you’ll gain back the weight. Which is why people are usually advised not to do that.
Controlling for height and sex?
I think the problem is that maintaining a state of semi-starvation for the rest of one’s life is very unpleasant and difficult, and is achieved by very few people:
Well, the identical twin parts of the study would automatically control for height and sex :)
What?
I meant, it doesn’t surprise me at all that if you pick a bunch of pairs of twins, the correlation between “x’s weight” and “x’s twin’s weight” would be very large—but if you only picked pairs of male twins between 1.77 m and 1.80 m tall and you got the same result...
Link to twin study. A quick scan (I don’t have time to read it in full right now, but I will later) suggests they used twins of the same sex, and they also compared BMI not weight, which controls for height.
Lesswrongers are surprised by this? It appears figuring out metabolism and nutrition is harder than I thought.
I believe that obesity is a problem of metabolic regulation, not overeating, and this result seems to support my belief. Restricting calories to regulate your weight is akin to opening the fridge door to regulate its temperature. It might work for a while but in the long run you’ll end up breaking both your fridge and your budget. Far better to figure out how to adjust the thermostat.
Some of the things that upregulate your fat set point are a history of starvation (that’s why calorie restriction is bad in the long run), toxins in your food, sugars (especially fructose—that stuff is toxic) and grains. Wheat is particularly bad—it can seriously screw with your gut and is addictive to boot.
There’s been more recent work suggesting that planets are extremely common. Most recently, evidence for planets in unexpected orbits around red dwarfs has been found. See e.g. here. This is in addition to other work suggesting that even when restricted to sun-like stars, planets are not just common, but frequently in the habitable zone. Source (pdf). It seems at this point that any contribution to the Great Filter from planet formation must be declared completely negligible. Is this analysis accurate?
Is there a wiki or website that keeps track of things related to the Great Filter?
I guess I’m looking for something that enumerates all the possible major filters, and keeps track of data and arguments pertaining to various aspects of these filters.
I’m not aware of any such thing. It would be nice to have. There was an earlier Boston meetup a few years ago where a few of us tried to brainstorm future filters, but we didn’t really get anything that wasn’t already known (I think jimrandomh mentioned that there have been similar attempts at other meetups and the like). The set of filters proposed in the past, though, is large. I’ve seen almost every major step in the evolution of life labeled as a filter, and there are sometimes reference-class-tennis issues with them, especially when connected to developments that aren’t as obviously necessary for intelligent life.
I have a notion that the proportion of sociopaths is a filter as the tech level goes up—spam is a problem, though more of a dead-weight loss than a disaster. If we get to the point of home build-a-virus kits, it might be a civilization-stopper. Was this on the list?
I think that was included; the point that as tech goes up, many heavily devastating weapons become substantially easier for individuals to make/possess was discussed. We’re also already seeing that now. The amount of damage a single person with a gun could do has gone up over time, and we now have 3D printed guns. So do-it-yourself viruses look like part of a general trend.
I am not an astrophysicist, so not an authoritative voice here, but yes, almost every star is likely to host a bunch of planets, some probably in a habitable zone. Even our close neighbors, the three Centauri stars and Vega, have planets around them, or at least an asteroid belt hinting at planets. So at least a couple of terms in the Drake equation are very close to unity.
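For concreteness, here is the Drake equation with the planet term pinned by the new observations; every other value below is a placeholder, not an estimate anyone here is defending:

```python
# Toy Drake-equation calculation. Only R_star and f_p reflect rough
# published figures; the rest are illustrative guesses.
R_star = 7      # new stars per year in the galaxy (rough published figure)
f_p    = 1.0    # fraction of stars with planets -- the term now near unity
n_e    = 0.4    # habitable planets per planet-bearing star (illustrative)
f_l    = 0.1    # fraction of those where life arises (pure guess)
f_i    = 0.01   # fraction of those developing intelligence (pure guess)
f_c    = 0.1    # fraction that become detectable (pure guess)
L      = 10_000 # years a civilization stays detectable (pure guess)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # expected number of currently detectable civilizations
```

With f_p near 1, any Great Filter has to live in the biological and social factors further down the product.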
I second this question. Are we now completely certain of this rarity?
To whoever fixed it so that we can see the parents of comments when looking at a user’s comments, major props to you for being awesome.
I dislike the change, as it’s harder to get an impression about a new user based on their user page now, the comments by other users are getting in the way, and it’s not possible to tune them out. Also, the change has broken user RSS feeds.
Would it be a good solution to change the color of the parent comments to gray, so they would be easier to ignore?
Your props go to Lucas Sloan. Hail Lucas!
I’m a little torn on that one—on one hand it adds convenience most of the time, but it makes it less convenient to check on recent karma. The latter is something I feel like doing now and then, but it’s possible I’m saner if it isn’t convenient.
It was never an ideal way to check on recent karma, though it was better than nothing. I’d quite like something similar to Stack Overflow’s Reputation view.
Yeah, this has been requested before.
I am particularly aware of it right now because I’ve been watching my 30-day karma drop slowly and steadily for the last couple of days, but I have no idea what in particular people want less of.
That said, I suspect that’s just because I’m getting individual downvotes across a wide set of comments in the 0+/-2 range, and changing the way that information is displayed won’t really help me answer that question any better than the current system does.
30-day karma falling is probably old posts aging out. Unless your total karma is also falling?
Not sure about my total. I notice the 30-day shifts when I reload, but not the total changes (I assume that’s because the relative changes are larger, but it might be because it’s lower, or lighter-colored, or for some other reason; I’m not sure.)
My reply to wedrifid is relevant here as well.
Have you been making fewer comments? My 30 day karma depends far more on the amount of commenting I’ve been doing than on anything else.
Less not doing whatever you did exactly 30 days ago! (You could look this up if interested. If there in fact aren’t any comments falling off the 30 day list then you may have cause for concern. Or, well, cause for mild interest anyway.)
I believe the timing-based updates only happen once a day, so when I see drops over the course of a day I assume the cause is downvotes, rather than change of reading frame. Admittedly, I’m not sure why I believe that, now that I think of it.
Also, it’s really more “less not doing whatever I did over a 30-day period ending yesterday”, not “less not doing whatever I did exactly 30 days ago,” right? Which is harder to look up.
Though you’re right, regardless, that if I were sufficiently interested I could extract this information. And yeah, “concern” is pushing it, but I do try to use downvotes (as distinct from fewer-upvotes-than-I-received-earlier) as information about what I’m doing that the community wants less of. “Mild interest” is about right.
No. The rest of the period hasn’t changed. Any period related changes to the 30 day karma are the result of the stuff 30 days ago falling out of scope. The remainder influences the absolute level of 30dk but not the fluctuations.
(nods) You’re right, of course. I sit corrected.
I second Nancy and Vladimir in disliking it.
Maybe it could be made a user preference, the way it is for the Recent Comments page.
I would definitely like to see it as a user preference.
Also, how do people feel about the relatively subtle color change for new comments which has replaced the bright green edge?
I liked the green edge more.
I’m finding the new version usable—the green edge might allow for faster scanning, but the new version isn’t bad with a little practice.
On the other hand, if there are multiple new comments in a thread, I find that I miss the alternating white/pale blue way of distinguishing comments. It would be nice to have two pastels instead of one.
The visual difference between a new comment and an old comment should be greater than the difference between two old comments.
How about using two pastel colors for the old comments… and using the white background for the new comments?
It would also be nice to have e.g. a small green “NEW” text in the corner of new comments, so I can quickly find a few new comments in a long discussion by using the “Find Next” functionality of my browser. (Because I don’t have a functionality to search a comment based on its background color.)
If you’re going to do this at all, it should be a more-likely-unique string (eg ”!new!” rather than “NEW”).
I much prefer the new version. It’s far easier to spot new comments.
I just found out that there exists an earlier term for semantic stopsigns: a thought-terminating cliché.
Has anyone written a worthwhile utilitarian argument against transhumanism? I’m interested in criticism, but most of it is infested with metaphysical and metaethical claims I can’t countenance.
What proposition are you looking for an argument against?
Transhumanism can mean a lot of things: the transcending of various heretofore human limits, conditions, or behaviors—which are many and different from one another.
And for those things, you might refer to the proposition that they are possible, or likely, or inevitable; (un)desirable or neutral; ethically (in)permissible or obligatory; and so on.
I’m looking for utilitarian arguments against the desirability of changing human nature by direct engineering. Basically, I’m wondering if there’s any utilitarian case for the “it’s fundamentally wrong to play God” position in bioethics. (I’m being vague in order to maximize my chance of encountering something.)
A while back, I made the argument that the ability to remove fundamental human limits will eventually lead to the loss of everything we value.
How long have you been this pessimistic about the erasure of human value?
Not sure. I’ve been pessimistic about the Singularity for several years, but the general argument for human value being doomed-with-a-very-high-probability only really clicked sometime late last year.
This seems to assume a Hansonesque competitive future, rather than an FAI singleton, is that right?
Pretty much.
Please be more specific and define “changes to human nature”.
We already make many deliberate changes to people. We raise them in a culture, educate them, train them, fit them into jobs and social roles, make social norms and expectations into second nature for most people, make them strongly believe many things without evidence, indoctrinate them into cults and religions and causes, make them do almost anything we like.
We also make medical interventions that change human nature, part of which is to die of diseases easily treated today. We restore sight to the myopic and hearing to the hard of hearing, and lately even to the blind and deaf. We even transplant complex organs.
We have changed the experience of human life out of all recognition with the ancestral state, and we have grown used to it.
Where does the line between human and transhuman lie? We can talk about any specific proposed change, and some will be bad and some will be good. But any argument that says all changes are inherently bad might also say that all the changes that already occurred have been bad as well.
I was an intern at MIRI recently and I would like to start a new LW meetup in my city but as I am still new on LW, I do not have enough karma points. Could you please upvote this comment so that I can get enough karma to post about a meetup? lukeprog suggested I do this. I only need 2 points to post in the discussion part. Thanks to you all
Confirmed.
Thanks for bringing back the bright-colored edges for new comments.
The additional thing I’d like to see along those lines is bright color for “continue this thread” and “expand comments” if they include new comments. I’d also like to see it for “comment score below threshold”, but I can understand if that isn’t included for social engineering reasons.
Risks of vegetarianism and veganism
Personal account of physical and emotional problems encountered by the author which were reversed when he went back to eating animal products. Much discussion of vitamins and dietary fats, not to mention genetic variation. Leaves the possibility open that some people thrive on a vegetarian diet, and possibly on a vegan diet.
I just realized that willingness to update seems very cultish from outside. Literally.
I mean—if someone joins a cult, what is the most obvious thing that happens to them? They update heavily, towards the group’s teachings. This is how you can tell that something wrong is happening.
We try to update on reasonable evidence. For example, we would update on a scientific article more than on a random website. However, from outside it seems similar to willingness to update on your favorite (in-group) sources, and unwillingness to update on other (out-group) sources. Just like a Jehovah’s Witness would update on the Watchtower, but would remain skeptical towards Mormon literature. As if science itself were your cult… except that it’s not really science as we know it, because most scientists behave outside the laboratory just like everyone else; and you are trying to do something else.
Okay, I guess this is nothing new for a LW reader. I just realized now, on the emotional level, how willingness to update, considered a virtue on LW, may look horrifying to an average person. And how willingness to update on trustworthy evidence more than on untrustworthy evidence, probably seems like hypocrisy, like a rationalization for preferring your in-group ideas to out-group ideas.
So does that make stubbornness a kind of epistemic self defence?
Almost surely, yes. If other people keep telling you crazy things, not updating is a smart choice. Not the smartest one, but it is a simple strategy that anyone can use, cheaply (because we can’t always afford verification).
Perhaps, but moving the local optimum from Politics-is-the-Mindkiller towards a higher sanity line seems to require dropping this defensive mechanism (on the societal level, at least).
First one must be able to tell the difference between reliable and unreliable sources of information. Only then is it safe to drop the defensive mechanism.
Just dropping the defensive mechanism could lead to whatever… for example massive religious zealotry. Or, more probably, some kind of political zealotry.
Unfortunately, one cannot simply revert stupidity. If a creationist refuses to update to evolution, that’s bad. But if they update to scientology, that’s even worse. So before people start updating en masse, they had better understand the difference.
For what it’s worth, the complaints I’ve heard about LW center around arrogance, not excessive compliance.
You could restate the arrogance as an expectation that others update when you say things.
Likewise, especially of people talking about fields they are not experts in.
I agree that this is a common failure mode locally on LW, but even if this community did not have this problem, Villiam_Bur’s point would still have a lot of explanatory power on why raising-the-sanity-line methods are resisted by society at large.
On the other hand, once you’re in one it’s the not-updating that gives it away.
I worry that this is a case of finding a ‘secret virtue’ in one’s vices: I think we’re often tempted to pick some outstandingly bad feature of ourselves or an organization we belong to and explain it as the necessary consequence of a necessary and good feature.
My reason for thinking that this is going on here is that another explanation seems much more plausible. For one thing, you’d think the effect of seeing someone heavily update would depend on knowing them before and after. But how many people who think of LW this way think so because they knew someone before and after they became swayed by LW’s ideas?
With Nancy, I think that the PR problem LW has isn’t the impression people have that LWers are converts of a certain kind. Rather, I think what negative impression there is results from an extremely fixable problem of presentation: some of the most prominent and popular ways of expressing core LW ideas come in the form of 1) ‘litanies’, or pseudo-Asian mysticism. These are good ideas being given a completely unnecessary and, for many, off-putting gilding. No one here takes the religious overtone seriously, but outsiders don’t know that. 2) explicit expressions of contempt for outsiders, such as ‘raising the sanity waterline’, etc.
I admit that I honestly do consider many people insane, and I always did. Even the smarter ones seem paralyzed by some harmful memes. I mean, people argue about words that have no connection with reality, while in other parts of the world children are dying from hunger. Hoaxes of every kind circulate by e-mail, and it’s hard to find someone I know personally who hasn’t sent me them repeatedly (even after having it explained repeatedly that they were hoaxes, with pointers to sites that collect hoaxes). Smart people start speaking in slogans when difficult problems need to be solved, and seem unable to understand what the problem with this kind of communication is. People do bullshit that obviously doesn’t and can’t work, and insist that you have to do it harder and spend more money, instead of just trying something else for a while and observing what happens. So much stupidity, so much waste. -- And the few people who know better, or at least are able to know better, are often afraid to admit it even to themselves, because the idea that we live in an insane society is scary. So even they don’t resist the madness; at best they don’t join it, but they pretend they don’t see it. This is how I saw the world for decades before I found LW.
And yes, it is bad PR. It is impolite towards the insane people, who may feel offended and then try to punish us. But even worse, it is a bad strategy towards the sane people, who are not yet emotionally ready to admit that the rest of the world is not sane. Because it goes against our tribal instincts. We must agree with the tribe, whether it is right or wrong; especially when it is wrong. If you are able to resist this pressure, it’s probably not caused by higher rationality, but by lower social skills.
So how exactly should we communicate the inconvenient truths? Because we are trying to communicate truthfully, aren’t we? Should we post the information openly, and have bad PR? Should we have a secret forum for forbidden thoughts, and appear cultish, and risk that someone exposes the information? Should we communicate certain thoughts only in person, never online? Seems to me that “bad PR” is the least wrong option.
Is there a way to disagree with the majority, be open to new members, and not seem dangerous? Perhaps we could downplay our ambitions: stop talking about improving the world, and pretend that we are just some kind of Mensa, a few geeks solving their harmless Bayesian equations, unconnected with the real world. Or we could make a semi-secret discussion forum; it would be open to anyone after overcoming a trivial inconvenience, and it would not be indexed by Google. Then the best articles (judged by quality and PR impact) would be published on a public forum. Perhaps the articles should not all appear in the same place: everyone (including Eliezer) would have their own blog with their own articles, and LW would just contain links to them (like Digg). This would be an inconvenience for publishing, but we could provide some technical help for people who have problems starting their own website. Perhaps we should split LW into multiple websites, concerned with different topics: artificial intelligence, effective philanthropy, rationality, community forum, etc. -- All these ideas are about being less open, less direct. Which is dishonest per se, but perhaps this is what good PR means: lying in socially accepted ways; pretending what other people want you to pretend.
This could probably be a separate topic. And first we would have to settle what we want to achieve, and only then discuss how.
I don’t think you do; I think you consider most people to be (in some sense rightly) wrong or ignorant. Just the fact that you hold people to some standard (which you must do, if you say that they fail) means you don’t think of them as insane. If you’ve ever known someone with depression or bipolar disorder, you know that you can’t tell them to snap out of it, or learn this or that, or just think it through. Even calling people insane, as an expression of contempt, is a way of holding them to a standard. But we don’t hold actually insane people to standards, and we don’t (unless we’re jerks) hold them in contempt. You don’t communicate the inconvenient truth to the insane. You don’t disagree or agree with the insane. The wrong, the ignorant, the evil, yes. But not the insane.
No one here (and I mean no one) actually thinks the world is full of insane people. That’s a bit of metaphor and hyperbole. If anyone seriously thought that, their behavior would be so radically strange (think ‘I am Legend’ or something), you’d probably find them locked up somewhere.
The claim that everyone else is insane doesn’t sound dangerous, it sounds resentful. Dangerous is not a problem. I don’t think we need to implement any of your ideas, because the issue is purely one of rhetoric. None of the ideas themselves are a problem, because there’s no problem with saying everyone else is wrong so long as you have either 1) results, or 2) good, persuasive, arguments. And if all you’ve got is (2), tone matters, because you can only persuade people who listen to you. There’s no reason at all to hide anything, or lie, or pretend or anything like that.
Speaking about typical individuals, ignorant is a good word, insane is not. As you say, it makes sense to try to explain things to an ignorant person, not to an insane person. Things can be explained to individuals with some degree of success. I agree with you on this.
The difference becomes less clear when dealing with groups of people, societies. Explaining things to a group of people is more often (to anthropomorphize) like dealing with an insane person. Literally, the kind of person that hears you and understands your words, but then also hears “voices in their head” telling them it’s bad to think that way, that they should keep doing the stupid stuff they were doing regardless of the problems it brought them, etc. Except that these “voices” are the other people. -- But this probably just proves that societies are not individuals.
Yeah, having results would be good. The Friendly AI would be the best, but until then, we need some other kind of results.
So, an interesting task would be to make a list of results of the LW community that would impress outsiders. Put that into a flyer, and we have a nice PR tool.
That’s fair enough. I’d stay away from groups of people. Back in the day, they used to write without vowels, so that you could only really read something if you were either exceptionally literate or were being told what it said by a teacher. I say never communicate with more than a handful of people at once, but I suppose that’s not possible a lot of the time.
Perhaps it would be less confusing to treat a society as if it were a single organism, of which the people within it are analogous to cells rather than agents with minds of their own. I’m not sure how far such an approach would get but it might be interesting.
CFAR might be able to demonstrate such after a few more years of their workshops. I’m not sure how they’re measuring results, but I would be surprised if they were not doing so.
CFAR planned to do some statistics about how the minicamp attendees’ lives have changed after a year, using a control group of people who applied to minicamps but were not admitted. Not perfect, but pretty good. And the year from the first minicamps is approximately now (for me it will be in one month). But the samples are very small.
With regards to PR, I am not sure if this will work. I mean, even if the results are good, only the people who care about statistical results will be impressed by them. It’s a circular problem: you need to already have some rationality to be able to be impressed by rational arguments. -- Because you may also say: yeah, those guys are trying so hard, and I will just pray or think positively and the same results will come to me, too. And if they don’t, that just means I have to pray or think positively more. Or even: statistics doesn’t prove anything, I feel it in my heart that rationality is cold and can’t make anyone happy.
I think that people who don’t care about statistics are still likely to be impressed by vivid stories, not that I have any numbers to prove this.
I agree. But optimizing for good storytelling is different from optimizing for good science. A good scientific result would be like: “minicamp attendees are 12% more efficient in their lives, plus or minus 3.5%”. A good story would be “this awesome thing happened to a minicamp attendee” (ignoring the fact that an equivalent thing happened to a person in the control group).
Maybe the best would be to publish both, and let readers pick their favourite part.
I’m sure they’ll be publishing both stories and statistics.
One more possibility: spin off instrumental rationality. Develop gradual introductions on how to think more clearly to improve your life.
How do other people use their whiteboards?
After having my old 90 x 60 whiteboard stashed down the side of my bed since I moved in, nearly two years ago, I finally got around to mounting it a couple of weeks ago. I am amazed at how well it complements the various productivity infrastructure I’ve built up in the interim, to the point where I’m considering getting a second 120 x 90 whiteboard and mounting them next to each other to form an enormous FrankenBoard.
A couple of whiteboard practices I’ve taken to:
Repeated derivation of maths content I’m having trouble remembering. If there’s a proof or process I’m having trouble getting to stick, I’ll go through it on the board at spaced intervals. There seems to be a kinaesthetic aspect to using the whiteboard that I don’t have with pen and paper, so even if my brain is struggling to remember what comes next, my fingers will probably have a good idea.
Unlike my other to-do list mechanisms, if I have a list item with a check box on the whiteboard, and I complete the item, I can immediately draw in a “stretch goal” check box on the same line. This turns into an enormous array of multicoloured check-boxes over time, which is both gratifying to look at and helpful when deciding what to work on next.
What are the advantages over pencil-and-paper? I can think of a couple, but would like to hear what a more frequent user says.
Firstly, a hypothesis: I am highly visual and like working with my hands. This may contribute considerably to any unusual benefit I get out of whiteboards.
So, advantages:
A whiteboard is mounted on the wall, and visible all of the time. I’m going to be reminded of what’s written on it more frequently than if it’s on a piece of paper or in a notebook. This is advantageous both for reminder/to-do items and for material I’m trying to learn or think about.
Instant erasure of errors. Smoosh and it’s gone. I find pencil erasers cumbersome and slow, and generally dislike pencil as a writing medium, so on paper my corrected errors become a mess of scribbled obliteration.
Being able to work with it like an artistic medium. If I’m working with graphs (either in the sense of plotted functions or the edge-and-node variety), I can edit it on the fly without having to resort to messy scribbles or obliterating it and starting again.
Not accumulating large piles of paper workings of varying (mostly very low) importance. I already have an unavoidably large amount of paper in my life, and reducing the overhead of processing it all is valuable.
The running themes here seem to be “I generate a lot of noisy clutter when I work, both physically and abstractly, and a whiteboard means I generate less”.
The physically larger my to-do list is, the more satisfying it feels to cross something off it. Erasing also works much better on whiteboard than with pencil and paper.
Aid in demonstrating things to others, social aesthetic value as a decoration, and personal aesthetic value. Also, erasing is way faster.
Being large and in your field of view. Pieces of paper, even ones explicitly put in places to remind you, get lost under things, get shuffled away, or are easy to ignore.
Sounds like a good system! What’s a “stretch goal”, if you don’t mind sharing?
It’s a term made popular by Kickstarter. If you achieve your initial goal and have resources left over, your “stretch goal” is what you do with the extra.
I will sometimes write things on a chalkboard that I’m trying to understand. I only have access to chalkboards, but I think that I would prefer them regardless—the chalk feels more substantial.
I no longer use whiteboards if I can help it; while I trained back my fine motor control after my stroke sufficiently well to perform most other related activities, writing legibly on a vertical surface in front of me is not something I specifically trained back and doesn’t seem to have “come for free” with other trained skills.
When I used them, I mostly used them for collaborative thinking about (a) to-do lists and (b) system engineering (e.g., what nodes in the system perform/receive what actions; how do those actions combine to form end-to-end flows).
I far prefer other tools, including pencil and paper, for non-collaborative tasks along those lines. And these days I mostly work on geographically distributed teams, so even collaborative work generally requires other tools anyway.
I pretty much only use whiteboards to communicate to others. For private purposes, I use pencil and paper or a text file in my Dropbox folder.
Do you know of a way to edit a text file in Dropbox from one’s iPhone?
Evernote is good for that purpose.
No. I use Android, and the Dropbox app here has a built-in text editor as well as allowing you to use a different one.
So I’m interested in taking up meditation, but I don’t know how/where to start. Is there a practical guide for beginners somewhere that you would recommend?
Mindfulness in Plain English is a good introduction to (one kind of) meditation practice.
It seems like most interested people end up practicing concentration or insight meditation by default (as indeed you will, if you read and follow the book). I would also recommend eventually looking into loving-kindness meditation. I’ve been trying it for a couple of weeks and I think it might be much more effective for someone who just wants a tool to improve quality of life (rather than wanting to be enlightened or something).
Loving-kindness meditation was one of the most easily accessible effective techniques for subverting intrusive anxiety I experimented with during my recovery. (There were more effective techniques, but I couldn’t always do them reliably.)
Have you seen the previous LW posts on the subject?
I looked through some of them; there’s a lot of theory and discussion, but I’m really just after a basic step-by-step guide on what to do.
From Meditation, insight, and rationality (Part 2 of 3):
I found Daniel Ingram’s Mastering the Core Teachings of the Buddha a fun read.
I tried zazen for a few months: I like it and decided to start it again just this week. Here is straightforward advice on what to do: https://www.youtube.com/watch?v=nsFlrdXVFgo
If you don’t want to watch the long YouTube video, read the following, then skip to 8:20, where he explains how to think/what to do with your mind:
How to sit your body: cross-legged or lotus—but lotus requires flexibility and isn’t necessary. Straight spine, back and neck. Rest your hands to make a ring shape. Face a wall and shut your eyes. Rock side to side a little, then stop straight.
How to think: the first few times it’s very difficult to let your mind free itself of thoughts/chatter, so a way to practice this is counting down slowly from 10, restarting if you stray from counting onto thinking about something else.
More grist for the hypothetical Journal of Negative Results
Scientist wants to publish replication failure. Nature won’t accept the article (even as a letter). So scientist retracts previous letter written in support of the non-replicated study.
Hypothetical?
Someone needs to redo that website before my eyes explode.
Just use your browser’s zoom.
While researching a forthcoming MIRI blog post, I came across the University of York’s Safety Critical Mailing List, which hosted an interesting discussion on the use of AI in safety-critical applications in 2000. The first post in the thread, from Ken Firth, reads:
I encountered this thread via an also-interesting technical report, Harper (2000).
That report also offers a handy case study in the challenges of designing intelligent control systems that operate “correctly” in the complexities of the real world:
[I made a request for job finding suggestions. I didn’t really want to leave details lying around indefinitely, to be honest, so, after a week, I edited it to this.]
For job searching, focus less on sending out applications and more on asking [professors | friends | friends of friends | mentors | parents | parents’ friends] if they know of anyone who’s hiring for [relevant field]. When they say no, ask if they know anyone else you should talk to. To generalize from one example, every job I’ve ever worked has come from some sort of connection. I found my current position through my mom’s dance instructor’s husband.
For figuring out what to do with your long-term future, there’s not much I can say without knowing your goals, but http://80000hours.org/ might or might not be relevant. If so, they’re willing to advise you one-on-one.
I would like to get better at telling stories in conversations. Usually when I tell a story, it’s very fact-based and I can tell that it’s pretty boring, even if it wasn’t for me. Are there any tips/tricks/heuristics I can implement that can transform a plain fact-based story into something more exciting?
It’s okay to lie a little bit. If you’re telling the story primarily to entertain, people won’t mind if you rearrange the order of events or leave out the boring bits.
Open with a hook. My style is to open with a deadpan delivery of the “punchline” without any context, e.g. “Quit my job today.” This cultivates curiosity.
Keep the end in mind. I find that this avoids wandering. It helps if you’ve anchored the story by “spoiling” the punch line. We all have that friend who tells rambling stories that don’t seem to have a point. That said -
Don’t bogart the conversation. If you’re interrupted, indulge the interruption, and bring the conversation back to your story if you can do so gracefully. It’s easy to get fixated on your story, and to become irritated because everybody won’t shut up. People detect this and it makes you look like an ass. Sometimes it works to get mock-irritated—“I was telling a story, dammit!”—if doing so feels right. Don’t force it.
Don’t get bogged down in quoting interactions verbatim. Nobody really cares what she said or what you said in what order.
Don’t worry about getting all the details correct. (Your first and last points.)
I know a person whose storytelling is painful to listen to, because sooner or later they run into some irrelevant detail they can’t remember precisely, and then spend literally minutes trying to get that irrelevant detail right, despite the audience screaming at them that the detail is irrelevant, the story is already too long, and they should quickly move to the point.
Perhaps this could be another good piece of advice: start with short stories. Progress to longer ones only when you are good with the short ones.
Watch stand-up comedy. There’s lots of it on YouTube.
Just listening to and imitating the cadence of how professional comics speak is enough to boost one’s funniness by 2.3 Hickses.
A good piece of advice lukeprog gave me is to structure your story around an emotional arc. E.g. a story about an awesome show you went to is also a story about what you felt before, during, and after the show. A story about the life-cycle of a psychoactive parasite is also a story about a conflict between the clever parasite and the tragic host; or a story about your feelings of fascination and horror when you first learned about the parasite.
Join a pen-and-paper RPG group; it’s the old trick of spending a lot of time doing whatever you want to get better at. Easy storytelling practice sessions every week.
Since I’m used to hearing Dutch Book arguments as the primary way of defending expected utility maximization, I was intrigued to read this passage (from here):
The Wakker 2010 reference is to a book; searching it for “dutch book” gets me the footnote
And looking up assignment 3.3.6 gets
Since I don’t really have the time or energy to work my way through a textbook, I thought that I’d ask people who understood decision theory better: exactly what is the issue, and how serious of a problem is this for somebody using the Dutch Book argument to argue for EU maximization?
The von Neumann-Morgenstern theorem isn’t a Dutch book argument, and the primary purpose of Dutch book arguments is to defend classical probability, not expected utility maximization. von Neumann-Morgenstern also assumes classical probability. Jaynes uses Cox’s theorem to defend classical probability rather than a Dutch book argument (he says something like using gambling to defend probability is uncouth).
I don’t really understand what issue the first reference you cite claims exists. It doesn’t seem to be what the second reference you cite is claiming.
I’m not really sure whether the parts of Wakker that I quoted are the parts the first cite is referring to, either—it could be that the first cite is talking about something completely different. That was the only part of Wakker I could find that seemed possibly relevant, but my search was extremely cursory, since I don’t really have the time to read through a 500-page book of dense technical material.
Wouldn’t this trivially go away by redenominating outcomes in utilons instead of dollars with diminishing marginal returns?
Then the bookie doesn’t always profit from your loss. I don’t know if that matters to you, though.
You can elicit probabilities from risk averse agents by taking the limit of arbitrarily small bets. There is an analogy with electro-magnetism, where people who want to give a positivist account of the electro-magnetic field say that it is defined by its effect on charged particles; but since the charges affect the field, one talks of an infinitesimal “test charge.”
All students, including liberal arts students, at Singapore’s new Yale-NUS College will take a new course in Quantitative Reasoning, which John Baez had a hand in designing.
Baez writes that it will cover topics like this:
innumeracy, use of numbers in the media.
visualizing quantitative data.
cognitive biases, operationalization.
qualitative heuristics, cognitive biases, formal logic and mathematical proof.
formal logic, mathematical proofs.
probability, conditional probability (Bayes’ rule), gambling and odds.
decision trees, expected utility, optimal decisions and prospect theory.
sampling, uncertainty.
quantifying uncertainty, hypothesis testing, p-values and their limitations.
statistical power and significance levels, evaluating evidence.
correlation and causation, regression analysis.
John Baez, Quantitative Reasoning at Yale-NUS College
http://www.interfluidity.com/v2/4435.html
Related to this, there are a couple of professional philosophers around who are starting to take conspiracy theories seriously. Not just in the manner of critically analysing them, but also in the sense of how to actually make inferences about the existence of a conspiracy, how to contrast official theories with conspiracy theories, and how to reason when disinformation is present.
One of these individuals is Matthew Dentith, whose PhD thesis, In Defence of Conspiracy Theories, was on these topics (and he is in the process of writing a full book on the matter). The other is David Coady.
I’m running an Ideological Turing Test at my blog, and I’m looking for players. This year’s theme is sex and death, so the questions are about polyamory and euthanasia.
You can read the rules and sign up at the link, but, essentially, you answer the questions twice: once honestly, and the second time as you think an atheist or Christian (whichever you’re not) plausibly would. Then we read through the true and faux atheist answers and try to spot the fakes and see what assumptions players and judges made.
Anybody have tips for beginning an evaluation for the purpose of choosing between future career and academic options? As far as I can tell, my values are as commonly held as the next fellow’s:
Felt Purpose—A frequent occurrence of situations that demonstrate, in unique ways, the positive effects of my past actions. I see this as being somewhere in the middle of a continuum, where on one end I’d have only rational reason to believe my actions were doing good (effective charity), and on the other only the feeling as if I were doing good, but less rational evidence (environmental volunteerism?).
Utilitarian Benefit—I do need a fair bit of the left-hand side of the above continuum. While having no feeling at all might leave me depressed, too much feeling without enough rational evidence would leave me feeling hollow and wrong. I might also expect an increase in altruism through personal development that will push me further in the direction of effectiveness.
Academic Fun—The feeling of discovery, that my work is on a not-commonly-trodden path, and the realized ability to make a novel contribution.
Social Fun—Being surrounded by people with widely varying backgrounds, providing direct opportunity to partake in new social situations, a kind of fun that goes beyond anthropological interest. The ability to make friends of an intelligent, kind, and uplifting sort.
Artistic Outlet—The feeling that long-held and heavily inspected aspects of my psyche are finding expression, probably combined with the feeling that this expression is understood by like minds, and that those minds are being helped by it.
Financial Freedom—Money for me is less about buying objects than about the freedom to do new things, like travelling and inviting as many people as I want. Also, income reflects my real economic output, which is valuable in itself, with the benefit that I can put the profit toward effective charities.
Of course, this listing of values is the beginning of my self-evaluation. What I’m less keen on is where to find a listing of my options. I am a 24 year old computer science graduate, currently in the video game industry as a pipeline and graphics engineer working on AAA titles (as opposed to independent games). I have saved up for myself about 80,000 USD, and increase this savings at roughly $40,000 per year at my current job. I would have only minor qualms about relocating (within the country). I view myself as having a high aptitude for learning but a very limited working set. I tend to solve hard problems very cleverly and thoroughly, but find it difficult to maintain work on multiple hard problems at the same time.
Current options I’ve considered:
Switch jobs in-industry to reclaim novelty, and/or achieve a pay increase (“Got to move sideways to move upward.”)
Switch jobs out-of-industry (with retained CS focus, perhaps) to broaden interests, continue learning.
Return to academia (What study? What university?)
Form a startup with already-formed acquaintances. (Make an indie game? Other solvable problems with my skillset?)
Combination of above.
??
Would love comments, although interestingly typing this out has itself been a great help.
The impression I get is that games programming is underpaid and overworked relative to other styles of programming, because games are fun and the resulting halo effect dramatically increases the labor supply for the games industry as a whole. You may be able to make more working on Excel than Halo, but that’s a guess from the outside with only a bit of anecdotal backing. (This may not be true for your particular skillset; my impression is that the primary consumers of intense graphics are games and animation firms.)
This also would trade off felt purpose (even if you have trouble convincing yourself games are worthwhile, you’ll have a harder time convincing yourself Excel is worthwhile) for income, which may not be the right move, depending on the actual numbers involved. (It might be that decreasing your felt purpose by 1 on a ten-point scale is not worth an additional $10k a year, but is worth an additional $50k a year, to use arbitrary numbers as an example.)
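To make that arithmetic concrete, here’s a toy version in Python; the $30k-per-point exchange rate, like the other numbers, is purely an illustrative assumption of mine:

```python
# Toy felt-purpose vs. income tradeoff. All numbers are arbitrary
# illustrations, including the assumed $30k/year value of one purpose point.
def net_value(delta_income_k, delta_purpose_points, k_per_purpose_point=30):
    """Net change in value, expressed in $k/year equivalents."""
    return delta_income_k + delta_purpose_points * k_per_purpose_point

print(net_value(10, -1))  # -20: +$10k/yr doesn't cover losing a purpose point
print(net_value(50, -1))  # +20: +$50k/yr does
```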
My understanding is that grad school in computer science is only worthwhile if you want to be a professor (which I don’t think will fit your criteria as well as working in industry) or you’re looking for co-founders. Another thing to consider along similar lines is software mentorship programs for undergraduate students (here’s one in Austin; I imagine there’s probably one in Seattle). It’s a great way for you to meet people who might be cofounder material and see how they work, as well as to get social fun (and possibly academic fun).
I have started writing a Death Note fanfiction where the characters aren’t all as dumb as a bag of bricks (or, one could say, a rationalist fic) and… I need betas. The first chapter is available at http://www.fanfiction.net/s/9380249/1/Rationalising-Death and the second is pretty much written, but the first is confirmed to be “funky” in its writing, and since I’m not a native English speaker I’m not sure I can actually pinpoint what exactly is wrong with it. Also I’d love the extra help.
Anyone interested? My email for contact is pedromvilar@gmail.com (and I also have a tumblr account, http://scientiststhesis.tumblr.com )
Initial observations: you are cribbing too heavily from MoR, your Light is too much like Harry, the focus on utility seems silly, and jumping straight to crypto reasoning for randomness is completely unmotivated by anything foregoing.
Definitely interested. I’ll send you an email.
I’ll help out if you want. I sent you an email.
Politicians have a lot of power in society. How much good could a politician well-acquainted with x-risk do? One way such a politician could do good is by helping direct funds to MIRI. However, this is something an individual with a lot of money (successful in Silicon Valley or on Wall Street) could do as well.
Should one who wants to make a large positive impact on society go into politics over more “conventional” methods of effective altruism (becoming rich somewhere else or working for a high-impact organization)?
If you think about it, this is quite a striking statement about the LW community’s implicit beliefs.
I agree with something; I am just not sure enough whether I agree with you. Could you please make those implicit beliefs a bit more explicit?
The implicit assumption is that people don’t go into politics because they want (like, really, effectively, goal-oriented, outcome-based want) to make large positive impacts on society. We read a statement with that assumption quite plainly baked into it, and it doesn’t seem weird. The fact it doesn’t seem weird does itself seem kind of weird.
Sam Nunn was a US senator who wanted to buy surplus nuclear weapons from Russia, rather than risk them wandering off. He was unable to convince the rest of the government to pay for it, but he was able to convince the government to let Buffet and Turner pay for it. He has since decided that he can do more to save the world outside of government.
Added: But, he was rumored to be Secretary of Defense under Gore. So he thought some positions of government were more useful than others.
How?
I wonder if a good way to do good as a politician would be to try to be effective and popular during your term, then work as a lobbyist afterwards and lobby for causes you support (like x-risk reduction, prediction market legalization, and whatnot).
I suspect my model of the method used to allocate government funding may be oversimplified/incorrect altogether, but I am under the impression that those serving on the House Science Committee have a significant say in where funds are allocated for scientific research. Given that some members of this committee do not believe in evolution and do not believe in man-made climate change, it seems that the potential social good of becoming a successful politician could be very high.
My impression is that the House Science Committee is too high to aim for. A more plausible scenario would be MIRI convincing someone at the NSF to give them grants.
Alas, I think you are aiming too high. If every politician believed that all basic research had positive net expected-value, that would naturally benefit research of the type that MIRI thinks should be conducted.
Once that is the case, movement towards MIRI as a research grant recipient might be worth the effort of Joe Citizen. Until then, I’m skeptical that advocating for MIRI specifically is likely to be worth the effort, politically.
My question was how you could direct funds to MIRI, not whether there were stupid House Science Committee members. I’m suggesting that directing funds to MIRI in particular might not be politically feasible.
Why do you think MIRI is worse politically than any other basic research in controversial or plausibly-controversial topics?
I ask because lots of controversial areas receive government funded grants.
I have concluded that many of the problems in my life are the result of being insufficiently impulsive. As soon as I notice a desire to do something, I more or less reflexively convince myself that it is a bad idea to do just now. How can I go about increasing my impulsivity? I want to change this as a persistent character trait, so while ethanol works in the short run it is not the solution I am looking for.
This sounds more like low expectancy than a lack of impulsiveness. Impulsiveness can make you jump to your feet to do something you want to do, but it can just as quickly distract you from doing it. Check this out. Perhaps what you need is to increase optimism, not impulsiveness.
As for the desire to do something, try to convince yourself that it doesn’t need to be perfectly thought out before you do it. For example, if you are starting a business you could get bogged down trying to plan everything out perfectly and end up doing nothing. Instead, give yourself 24 hours to start a business. It’s an unreasonable request, but you would be surprised at how far you can get.
Interesting. I had been thinking that, since the things I persuade myself out of doing are usually unproductive, my behavior was not the same as procrastination.
I’ve been using “structured procrastination” to decent effect for getting productive-but-not-work activities done (cleaning, etc.); maybe I should add unproductive activities like “walk to the park” to my list (that is the most common thing I keep reflexively arguing myself out of for no good reason). Will consider “mental contrasting” as well.
Beeminder it. Seriously.
Is there something that makes you more likely to do things? Try to exploit that somehow.
For example, I am more likely to start doing things when it feels to me like a competition. So I think about someone else doing something similar to what I want to do, and then think: “well, I could make it even better.”
Aaron Winborn: Monday was my 46th birthday and likely my last. Anything awesome I should try after I die?
What’s actually known about women’s biological clocks?
Pretty much the usual if someone looks closely at a commonly held belief about a medical issue. The usual dramatic belief that fertility drops off sharply at 35 is based on French birth records from 1670 to 1830. Human fertility is very hard to study. Women who are trying to have their first child at age forty may be less fertile than average. And so on.
Zach Alexander just posted a reverse engineering of the Soylent recipe. It looks pretty legit, and reasonably easy to put together.
Confirmed. Tasty too. I got the supplies on the way home from work today, sans olive oil (which I had already) and potassium (which I ordered online). It’s not the cheapest way in the world to eat—it cost around $80 including the potassium. Most of the supplies will last 30 days, but some (oat flour, cocoa, and soy protein) will run out sooner. The potassium should last longer. A 30-day supply of everything would probably be around $100-$120.
In last year’s survey, someone likened Less-Wrong rationalism to Ayn Rand’s Objectivism. Rand once summed up her philosophy in the following series of soundbites: “Metaphysics, objective reality. Epistemology, reason. Ethics, self-interest. Politics, capitalism.” What would the analogous summary of the LW philosophy be?
In the end, I found that the simplest way to sum it up was to cite particular thinkers: “Metaphysics, Everett. Epistemology, Bayes. Ethics, Bentham. Politics, Vinge.”
A few comments:
For metaphysics… I considered “multiverse” since it suggests not only Everett, but also the broader scheme of Tegmark, which is clearly popular here. Also, “naturalism”, but that is less informative.
For epistemology… maybe “Jaynes” is an alternative. Or “overcoming cognitive bias”. Or just “science”; except that the Sequences contain a post saying that Bayes trumps Science.
For ethics… “Bentham” is an anodyne choice. I was trying to suggest utilitarianism. If there was a single well-known thinker who exemplifies “effective altruism”, I would have gone with them instead… Originally I said “CEV” here; but CEV is really just a guess at what the ethics of a friendly AI should be.
For politics… Originally, I had “FAI” as the answer here. That may seem odd—friendly AI is not presented as a political doctrine or opinion—but the paradigm is that AGI will determine the future of the world, and FAI is the answer to the challenge of AGI. These are political concerns, even if the ideal for FAI theory would be to arrive at conclusions about what ought to happen, that become as uncontroversial and “nonpartisan” as the equations of gravity. I chose Vernor Vinge as the iconic name to use here; I suppose one could use Kurzweil. Alternatively, one could argue that LW’s cardinal metapolitical framework is “existential risk”, rather than FAI vs UFAI.
I wonder whether more people will think of Julian Jaynes rather than E. T. Jaynes if you just rattle “Everett, Jaynes, Bentham, Vinge” at them. This does seem like a very nice ultra-concise description though.
In this context, this should surely be “Epistemology, Jaynes”—attributing the LessWrong conception of “Bayesianism” to the Rev Bayes is a bit of a stretch. Though it’s unclear what Jaynes would have thought of the claim that Bayes trumps science.
A nice feature on the Bitcoin-accepting pub in London.
Over the past several months, I have been practicing a new habit: whenever I have a ‘good’ idea, I write it down. (‘Good’ being used very loosely.)
This is a very simple procedure but it seems to have several benefits. First, I used to notice that I remembered having had a ‘good’ idea but could not remember what it was. I now notice this much more strikingly, and it causes some small amount of distress thinking about what I might have forgotten; writing ideas down relieves that worry. Second, I can refer back to them later. So far nothing significant has come out of this, but I like having the option, and I have gone back to note that some of my once-thought ‘good’ ideas don’t hold up on second thought. That is useful information about yourself to have. Third, it encourages me to have more good ideas. For a while I tried to write down one a day (possibly using a long-term ‘cached’ idea I had been floating for a while if I didn’t have anything good that day). Fourth, writing things down, even just for myself, helps me really get a clear idea of each one.
I’m sure there are many suggestions which are similar to this or encompass it. Obviously this is similar to having a journal and probably shares some of the benefits. This has the advantage of being extremely simple and takes hardly any time at all and is only done when it has an obvious benefit.
Getting Things Done suggests writing down everything you need to do as soon as you realize you need to do it, and this can include following up on good ideas.
So, with the PRISM program outed, the main thrust of discussion is about its legality and consequences. But what interests me is the rather non-political issue of general competence. One would think that the NSA, and in general any security agency, would take risk assessment and mitigation seriously. And having its cover blown ought to be somewhere close to the top of the list of critical risks. Yet the obvious weak point, letting outsiders with appropriate clearance deep inside the areas with compromising info, was apparently never addressed.
Even the standard approach of having tiered access for everyone regardless of the clearance level, and automatically checking and flagging every unusual escalation was either not implemented or cleverly subverted by a low-level admin. And given the Bradley Manning security breach, one would expect even a half-decent internal security officer to be rather paranoid. And who knows what other low-ranking admins quietly did and probably are doing with what information and for what purpose and in what organizations.
I am wondering: is it reasonable to assume that the people responsible for the integrity of a spy agency are this inept? Or is what we see now somewhere low on the list of risks, being handled according to plan? Or, if you go deeper into conspiracy mode, is it orchestrated for some non-obvious reason?
Personally, I hope it’s the last possibility, because I’d take competency over ineptitude anytime, nefarious purposes or not.
So, I’m not an expert, but going from a couple of news articles and HN discussion I get the impression that Snowden actually did require that level of access to do his job, and that it’s enough of a sellers’ market for people with his general class of IT skills that you can’t really get technically competent people if you add too many additional constraints.
First, no visible constraints are required, just routine logging, auditing and automated access pattern analysis. Possibly hardware- or kernel-based USB port locking, WiFi monitoring, that kind of thing. Second, as for it being a “sellers’ market”, that only means that someone decided that saving a few million a year in wages was more important than reducing the chances of a breach.
You need IT people to implement the logging and auditing to track other people. However you automate it, even automated systems require human maintenance and guidance.
Sysadmins are a natural root of trust. You can’t run a large-scale computing operation without trusting the higher-level sysadmins. Even if you log and audit all operations done by all users (the number of auditors required scales as the number of users), sysadmins always have ways of using others’ user accounts.
This isn’t a matter of paying more to hire better sysadmins. The value you want is loyalty to the organization, but you can’t measure it directly and sometimes it changes over time as the person learns more about the organization from the inside. You can’t buy more loyalty and prevent leaks by spending more money.
So how can you spend more money, once you’ve hired the most technically competent people available? I suppose you could hire two or more teams and have them check each other’s work. A sort of constant surveillance and penetration testing effort on your own networks. But I don’t think this would work well (admittedly I’ve never seen such a thing tried at first hand).
For comparison, take ordinary programming. When it’s important to get a program right, a coder or a whole team might be tasked with reading the code others wrote. Not just “code review” meetings, but in-depth reviews of the codebase. And yet it’s known that this catches few bugs, and that it’s better to do black-box QA and testing afterwards.
Now imagine trying to catch a deliberate bug or backdoor written by some clever programmer on the team. That would be much much harder, since unlike an ordinary bug, it’s deliberately hidden. (And black-box testing wouldn’t help at all if you’re looking for a hidden backdoor, as an example.)
Sysadmin work is even harder to secure than this. Thousands of different systems and configurations. Lots of different scripts and settings everywhere. Lots of places to hide things. No equivalent of a “clean build environment”—half the time things are done directly in production (because you need to fix things, and who has a test environment for something as idiosyncratic as a user’s Windows box?). No centralized repository containing all the code, searchable and with a backed-up tamper-proof history.
I was a programmer in the Israeli army and I know other people from different IT departments there. Not very relevant to whatever may be going on in the NSA, but I did learn this. It’s barely possible, with huge amounts of skill, money and effort, for a good sysadmin team to secure, maintain and audit a highly classified network against its thousands of ordinary, unprivileged users.
I’m pretty sure that securing a network against its own sysadmins—or even against any single rogue sysadmin—is a doomed effort.
(A lot of things that work great in theory fall apart the minute some low-clearance helpdesk guy has to help a high-clearance general mangle a document in Word. Only the helpdesk guy knows how to apply the desired font; only the general is allowed to read the text. Clearance does not equal system user privilege, either.)
These are all valid points, and I defer to your expertise in the matter. Now do you think that Snowden (who was a contractor, not an in-house “high-level sysadmin”, and certainly not an application programmer) cleverly evaded the logging and flagging software and human audit for some time, or that the software in question was never there to begin with? Consider:
You say
Against the top-level in-house guys, probably. Against a hired contractor? I don’t think so. What do you think are the odds that his job required accessing these lists and copying them?
I’m sorry, I assumed he was a sysadmin or equivalent. I got confused about this at some point.
A contractor is like an ordinary user. It’s possible to secure a network against malicious users, although very difficult. However, it requires that all the insiders be united in a thoroughly enforced preference of security over convenience. In practice, convenience often wins, bringing in insecurity.
Well, his own words that you quote (“that is why I accepted that position”) imply that this access was required to do the job, since he knew he would have access before accepting the job. The question then becomes, could he have been stopped from copying files outside the system? Was copying (some) files outside the system part of his job? Etc. (He could certainly have just memorized all the relevant data and written it down at home, but then he would not have had documentary proof, flimsy and trivially fakeable though it is.)
It’s possible to defend against this, but hard—sometimes extremely hard. It’s hard to say more without knowing what his actual day-to-day job as a contractor was. However, the biggest enemy of security of this kind is generally convenience (the eternal trade-off to security), followed by competence, and then followed distantly by money.
http://qz.com/96054/english-is-no-longer-the-language-of-the-web/
Plenty for LW here—not just that English is a steadily declining fraction of online material, but the difficulties of finding out what proportion of the web is in what language, and the process by which more and more of the web is in more and more languages.
Translation is costly, but the cost of translating a popular article may be smaller than the total costs of all people who would otherwise have to read it in a foreign language.
As a model, let’s say that reading an English article by a person with English as a first language, costs 1 point of energy. Reading the same article by a person with English as a second language, may cost 2 to 10 points, depending on how good that person is in English. Translating the article to another language would be perhaps 100 or 1000 points. Assuming these numbers, once you have enough readers with a native language X, it becomes globally cheaper to provide an official translation to X in addition to the original English article.
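A minimal sketch of that break-even calculation in Python, using the assumed point costs above (none of these numbers are measured; they just illustrate the model):

```python
import math

# Break-even readership for an official translation, under the assumed
# point costs: 1 for a native reader, 2-10 for a foreign-language reader,
# 100-1000 for translating the article once.
def break_even_readers(read_native=1.0, read_foreign=4.0, translate_cost=500.0):
    """Smallest number of native-X readers at which one translation is
    globally cheaper than everyone reading the English original."""
    saving_per_reader = read_foreign - read_native
    return math.ceil(translate_cost / saving_per_reader)

print(break_even_readers())                                          # 167
print(break_even_readers(read_foreign=10.0, translate_cost=1000.0))  # 112
```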
In the other direction, it may be useful to provide an ability to write articles in X, and then translate the popular ones to English. This is probably even more useful, because writing in a foreign language is more costly than reading. On the other hand, the filtering of the good articles (worth translating) must be done in the original language X, so there must already be a strong community.
On the other hand, if there are more versions of the article, the discussion will be split. But again, to some people this reduces the costs of participating. (Perhaps the best rated comments could be translated by volunteers too?)
Perhaps this is a bias when thinking about languages: One language would be cheaper than many languages. But in a situation where we already have people speaking fluently in different languages, translation may be cheaper than using one language. Especially if we can have a good filter for translating only the best things.
I think the situation is grainier than you imply—there has to be a motivation to translate, whether for love or money. “Worth translating” isn’t floating out there in the aether. Instead, translation only happens if someone cares enough to make it happen.
Take the blog which has criticism of American and Israeli politicians in English and criticism of Arab politicians in Arabic. Who’s going to translate the Arabic parts into English? Someone who’s fascinated by Arab politics? Someone who wants to get the politicians and/or the blogger into trouble? There might be other motivations as well, but whatever it is will have to be fairly strong, especially if much of the blog is going to get translated.
I think the point of the article is that everyone adopting the same language has such a high upfront cost that it isn’t going to happen any time soon, and it’s interesting to look at the consequences, including the unexpected consequence that the web lowers the barriers to connection so much that minority languages are less disadvantaged than they used to be.
What advances in IA would be needed to make learning languages easier?
What license are LW articles under? If they are CC, you could just submit anything you want translated to Duolingo.
Edit: That only gives you five European languages, but it might be a start.
Old articles (eg) are unlabeled, thus not licensed for any other use, but new articles (like this open thread) have a CC icon saying “by attribution.”
AFAIK, LW articles are considered copyright of the original author; it’s the LW wiki which is supposed to be CC.
I am very interested in higher-order theory of mind (ToM) tests for adults, to differentiate those with a high theory-of-mind quotient, if you will. My hypothesis is that people with a strong theory of mind are better at sales; I have an interest in both. Most tests I find online are meant for testing children or for Asperger’s Syndrome; what I want are complex questions and problems.
I recently saw a highly upvoted comment on reddit that stated “...Mifune, destroying the top black belts...” and cited this video. However, I believe OP has misread the situation. Mifune is highly respected and is in a room full of spectators who fully expect him to come out on top when sparring, or who would at least feel embarrassed for Mifune if he didn’t. The pretense is that everyone is trying their hardest to throw Mifune but he is so good he just twirls around them, and that it is not a demonstration—it’s real sparring. OP and those who upvoted failed to put themselves accurately in the role of the students in that room and think “gee, I totally don’t want to be that guy who throws the old man down.” The students are conforming, whether they believe it or not.
Any 4-year-old can pass the false belief test, but the Mifune video is a lot more subtle and complex. There are intentions involved; there is also each student’s knowledge of other students’ intentions, conformity, and self-delusion. The huge man being thrown by Mifune might say that he really believed he was trying his hardest not to be thrown, but ToM isn’t just about what others believe; it’s also about accurately predicting other people’s actions, why they did them, and what they believe their reasons were.
I am trying to compile a list of such examples, and would greatly appreciate it if anyone could add to this conversation by agreeing or disagreeing with what I have said, and especially by providing some examples of more complex theory of mind problems.
Political thrillers as a genre and some aspects of real life politics are a lot about theory of mind. The multilevelled effect of thinking how someone will act based on how you or a third party will act, ad infinitum.
I’m not sure exactly what you mean by theory of mind, though. It seems a different skill to model how a theoretical rational agent would behave in a certain situation (as we do when discussing the prisoner’s dilemma or related logic puzzles) than to model how a particular human being will behave (e.g. Alice tends to underestimate herself, Bob is overly cautious).
Showing Compassion To Designated Enemies: A Punishable Offense
There’s this very interesting trope being forged on TV Tropes, and I found it notable from a rationalist standpoint, especially the examples involving God… What an asshole.
Interesting link. Again from a rationalist standpoint it seems to be the correct move (at times, with the same conditions that apply to punishment in general as well as a few others).
As someone who’s studied The Art Of War intensively, and to whom “defeat means friendship” (as long as the opposition does feel thoroughly defeated) is a matter of course, I find that incentivizing unforgiveness and wanton destruction (I mean, seriously, they even had to kill the cattle? How is that in any way practical or rational?) is not only aesthetically dissonant, but wasteful and silly as hell. Going out of one’s way to ensure some don’t get a proper ritual, or otherwise kicking the defeated while they are down, also strikes me as disgusting, wasteful, and, frankly, cartoonishly over the top.
They should make a piece of fiction with a villain whose actions mirror those of YHWH perfectly. Have him blow a fortress to pieces and then demand that everything that breathes within be slaughtered. Have him kill some of his followers for disobeying some arbitrary rule, then have him kill many more just because they complained about it.
With his mind.
That’s not an omnibenevolent deity, that’s a fucking Dungeon Keeper.
See how many people notice the references. See how many identify this overlord as a vile villain before being informed he’s patterned after The God.
Do you remember the other parts too? The parts that don’t feel so warm and fuzzy? Or other effective military strategies? Defeat rather seldom means friendship when it comes to pre-established enemies, whether in The Art of War or outside of it. The generals discussed in The Art of War commanded their soldiers to kill other soldiers (and their leaders) and conquer strategic resources. The Art of War gives instructions on how to do so more effectively, with more compliance from soldiers and, where possible, in such a way that the enemy does not fight back significantly.
Yes, God is silly as well as a dick (and non-existent). But looking at the strategy from the perspective of instrumental rationality rather than indignant atheism, the cattle-killing silliness is not especially relevant. Most people agree that it is better to keep the cattle than kill them. What is more interesting is just when it is instrumentally rational to apply force (be it political, economic or physical) against those who are assisting a particularly troublesome enemy.
The details matter a lot, of course. There are cases where it is obvious that it is instrumentally rational to kill those who are assisting the enemy, and cases where that would be outright self-destructive. On one side of the line there is the sole provider of particularly advanced weaponry to the enemy, who does not trade with you and has no significant social alliances; on the other side there is a welfare charity that provides medical assistance indiscriminately worldwide, is loved by all and protected by an alliance of powerful nations. In between, things are less simple.
I don’t recall ever mentioning anything fuzzy or warm; it’s simply a pragmatic matter of taking the human factor into account. You try to fight and destroy as little as possible because it’s expensive, risky, and creates ill will in the long term. Napoleon and the armies of the Spanish Empire are excellent examples of how to win every battle, piss everyone off, and never win the peace.
Of course, if you do need to crush, kill, destroy, do it quickly and decisively, with no hesitation or pussyfooting. Therein lies the difference between being respectably compassionate, and being a sentimental fool begging to be abused.
The Art of War isn’t just about winning battles or wars, it’s about winning the peace that comes afterwards; it’s not just about beating your foe, but about getting them to stay beaten, and, in fact, help you out.
And, in particular, Scorched Earth tactics are extremely costly, and they are only effective in very specific circumstances.
As for the Bible examples cited there, I do not see how they are practical in any way, shape, or form. I can see the point of some of the other examples, but most of them are about helping out or standing up for someone who has been reduced to complete harmlessness and can’t be a threat anymore. This is most egregious when the defeated enemy in question is a freaking corpse.
God forbid I find myself defending the morality of the Hebrew Bible, but it seems to me you’re making a claim here (i.e. by implication, that the behavior of the Israelites was impractical/silly/evil) from a very poor epistemic position. The details of warfare, religious ritual, and the politics of conquest of that period are thin, and it’s not even clear that the story in Joshua (for example) represents an historical event (we have, and expect, no archeological record of this war), and so the practical purpose of the passage you cite may be entirely other than recommending or reporting a certain mode of warfare. Even if it is making such a report or recommendation, we simply lack the details to evaluate it.
Essentially, we have no idea whether or not what’s being described is silly or cunning or even moral or immoral (unless maybe we’re deontologists, but I doubt that). Speaking so confidently about something where confidence is so ill-warranted is often a symptom of being mind-killed on a particular subject. And we should expect that, as atheists, we are very, very likely to be mind-killed about biblical historiography (maybe right, as well, but mind-killed nonetheless).
Pointing out the horrors of the bible is a worthwhile way to put the morality of theists in tension with their holy book. But once that rhetorical work is done, the epistemically cautious consequentialist has absolutely no business throwing invective at the bible without a thorough study of the period being written about. Lord knows why you’d bother with that though.
“...the behavior of the Israelites was impractical/silly/evil”
I made no such claim, unless you mean the fictional Hebrews in the book rather than the collection of tribes the Romans diaspora’d many, many centuries later. Even then, what would you expect of the poor guys, when they’re being terrorized and cajoled into being evil by Kira-on-steroids?
However, from what we know of the behaviour of the Judeans under Roman rule, practicality and pragmatism weren’t top priorities for them, and their Scripture probably wasn’t a very sane source of advice on that matter.
More relevantly, some people still (claim to) take their ethics from the Bible, the Old Testament is very popular in the USA, and rhetoric of an Angry God Striking Down Evil and Smiting The Heathen, and of Lambs not Going Astray from Flocks (what a horrifying metaphor, being compared to cattle, of the stupidest sort no less!), is still floating around, influencing the way politics is done, whispering in the subconscious and blaring in the loudspeakers.
That is, of course, not the only culture that was influenced by a book that not only advocates genocide, but demands that, when it is done, it be carried out thoroughly. Consider the St. Bartholomew’s Day massacre, some of the actions of Cromwell, and the horrifying, nauseating, vertiginous irony of Old Testament memes having been a factor, no matter how small, in the intellectual genesis of the Final Solution.
I mean the Israelites! That’s what these people, whose historicity is not in doubt, called themselves. They’re the ones that wrote the bulk of the biblical material between around 900 and 587 BCE.
Maybe, though this strikes me as conjecture. I also don’t see how it’s related to claims you seem to be making about the authors of the bible and their people.
You know what, get back to me on the historicity of the Hebrews after reading this. I’m not averse to shifting my priors on that topic; please refer me to a work that does not use the Bible as a starting point for its hypotheses, if that’s at all possible.
Until I have a Bible-independent framework for how to think of the ethnic conglomerate that claims to be the Descent of Israel, I prefer to assume it is all fiction as a working hypothesis, and start from there.
This is also why I am reticent to call them Israelites, despite them calling themselves something like that, just the same way I wouldn’t call Arabs Ishmaelites; I doubt that Israel/Jacob, Isaac, Ishmael or Abraham existed, and I doubt either group’s direct descent from them. I certainly doubt that any human gained the title of Israel after wrestling with God and winning.
As for them calling themselves Israelites, allow me to be a little pedantic here: they called themselves B’nei Yisrael; “Israelites” is a Greek term.
I see the point that post is making, but I’m not just blowing air here. I have a degree in near-eastern history, and I studied with an archeologist who works on this period. None of us were theists, or remotely interested in defending or even discussing any modern religion. The historical books of the Hebrew bible are a relatively reliable historical record, so far as we can tell, but the fact is we just don’t have that much detail about the period in which it was written, so mostly we just don’t know. Too many of our sources are (as EY points out) singular. However, the historicity of the first-temple (900-587 BCE) Israelites very roughly as we find them in the HB is not really subject to much doubt. There are people who argue that the whole bible was written much later, and the history of Israel was just made up, but this theory is taken about as seriously by archeologists and historians as ID is by biologists. Needless to say, everything from the Torah that’s plausible (like the period of slavery) is pretty much unconfirmable. And no one takes seriously the implausible stuff, like Abraham or Noah.
I’m throwing authority at you here for two reasons: 1) the real argument consists in taking you through a bunch of archeology and historiography and I don’t feel like taking the time, and 2) neither do you. You don’t, I suspect, actually care at all about first-temple Israelite culture. You care about how modern Abrahamic religions are false and politically destructive. Granted! But that claim doesn’t have anything to do with history, and thinking that arguments against modern theists constitute an understanding of an ancient culture is not justifiable.
My real point however was one of caution. You’re exactly right to point out that by the standards of Christians or Jews or Muslims, the god of the Hebrew bible is savage. But you have no empirical standing to make claims about the morality or practicality of first temple Israelites, because pretty much no one does.
Yeah, who knows. But I call them Israelites because they called themselves that. I see no reason to make a point of it. And ‘Israelites’ may happen to have been a Greek term, but today it’s just the way you translate that Hebrew phrase into English.
You make some very good points and provide me with plausible background for them, so I’ll update on that.
Still, I don’t remember judging the morality of “first temple Israelites”. If I had to emit a conjecture based on currently available evidence, I’d say that the very fact that their religious books threatened to punish them for their compassion meant that they were, in fact, quite capable of compassion.
My working hypothesis is that they were not-evil-mutant not-chosen-by-god normal humans whose morality had to be religiously/ideologically turned off to help them perform atrocities. All their religion and the excesses, both in history and myth, are explainable by a sum of perfectly standard, naturally-arising human biases that manifested a certain way.
As for the lack of Art-Of-War-Compliance of the mythical group featured in the Bible, it would be as unfair to fault them for that as it would be to complain that they didn’t use some other rational-practical form of thought; that kind of stuff only seems obvious in retrospect. (This is so true that I currently have it as a rule of thumb that, if something doesn’t seem obvious in retrospect, and still seems wondrous and amazing, it means either that I didn’t fully understand it or that there’s something wrong with it).
Okay, I’ll forget about the ‘judging the morality of first temple Israelites’ thing. For fun, let’s talk about the Joshua ‘kill all the cattle and everything that breathes’ story. I’m going to make an educated guess as to the reasoning behind some of Joshua’s behavior on the basis of what I know about warfare several centuries after the time of the action of the story (when, in any case, this story was probably written).
Joshua is the story of the eponymous warlord of the Israelites after their arrival in the Levant and after the death of Moses. The Israelites had been living as nomads and had decided, for whatever reason, that the Levant was the place they would settle down. Unfortunately, the Levant was occupied and controlled from several city-states ruled by kings. Joshua’s army totally annihilates (down to the cattle) a couple of cities, Jericho and Hazor. The question is why.
The Israelites had until now been living as a nomadic tribe, moving through pretty poor territory and subsisting largely by pillage or by the contributions of, or extortions from, allies. This means that Joshua’s army has no redoubt, and no consistent source of food or men or materials for fighting. His aim is to settle the Levant permanently, and to do so he has to oust the occupying people.
This means he cannot tolerate a series of long sieges: his army is living off the crops sown by the people he is attacking and if his invasion of a certain territory lasts more than a year, his army will starve. People need to surrender, and surrender immediately. Joshua has to find a way to communicate this message to his enemies, and in these days the only form of mass communication is to do something interesting enough to be gossiped about.
What do you do if you want to convince a whole region full of people that the game has changed, and that you’re no longer pillaging and threatening? So long as people think you’re coming for wealth and food, they’ll fight you because they think you’re making a cost/benefit analysis: if they cause you more trouble than their cattle are worth, you’ll go away. So long as everyone is thinking about warfare in terms of materials gained and lost, you’re a pirate and a nomad and they’re the homeowners. But you’re trying to move in, and quickly, so you need to send a serious message.
The way to do this is to kill every living thing in a city. Everything. If you keep the cattle, then people will think you’re after the cattle. But if you ‘offer the city up to god’ and kill every living thing, that’s when people stop thinking of you as a pirate. That’s when they start realizing that they need to leave, and leave now. Because you’re not going to be satisfied with wealth, and you’re not going to take your time. You’re the unstoppable terror, and you’re here for good.
Joshua sends this message twice. First when he arrives in the region, with Jericho. Second, when the kings of the northern part of the Levant (the really nice part) get together to fight back, with Hazor. Fighting back is not allowed. Each time, the total annihilation was about sending a message, about saying ‘Stop fighting. We’re not going to be bought off, or sated by plunder. It’s over for you. Get. Out.’ Again, Joshua needs to send this message fast and loud because he has, tops, a few years before every Israelite is a slave or dead or scattered to foreign parts. The enemy needs to feel like it’s fighting a storm, or a god. Something with no pity, and no mercy, something that does not rest or negotiate.
I’m no student of strategy, but this doesn’t seem to me to be foolish or irrational. It also doesn’t seem to me that this involves anything like ‘turning off your morality’. Was any of this immoral? I dunno, this is kind of how warfare works, and for Joshua, it was this or death. It seems to me to be a well thought out strategy, and one that was very successful. Within a generation, the Israelites ruled the Levant. Within five, they were one of the greatest regional powers, capable of sitting at the table with Egypt and Babylon. And the civilization they established became one of the greatest cultural heavyweights in history, probably matched in the ancient world only by vedic India and classical Athens.
Yes, you’ve reinvented the classic economic/game-theoretic justification for total war—pour encourager les autres (“to encourage the others”). This reasoning was more or less the explicit goal of the Mongols when they did things like build pyramids of skulls, and I’ve read economic analyses of New World pirates in which the Jolly Roger served a similar intimidation function. The tactic works best when coordination is hard, because if the intended victims can coordinate, such extremism may prompt the formation of an effective alliance against oneself, even by parties who would’ve preferred to remain neutral.
This in fact seems to have happened to Joshua several times, though he managed to fight his way out of it both by way of some powerful alliances of his own, and by taking the defensive in these exchanges. One of his major advantages seems to have been that though he could not outlast his enemies year-to-year (having no city of his own), he could always outlast them within a given year, since the locals had to return home to plant and harvest crops and he didn’t. It was probably touch and go for a bit.
… You’ve succeeded at breaking through my rationality. I am so horrified by this line of thought that I cannot even begin to try to pick it apart. I’ll have to let this fester in my subconscious mind for a while, until all the screaming dies out and productive thought can take place.
The premise, inimical to the modern ear, that makes all of the above rational is ‘nothing matters but my people. Nothing.’ The problem with that principle is not obvious, however. We’re talking about a period which long predates the earliest emergence of the idea of a common humanity. That idea is hard won and hard kept and not at all obvious. I only half believe it myself: I’d drive a bulldozer through a crowd of people to save my baby son. I wouldn’t hesitate one tiny little bit.
ETA: But it’s not as if they didn’t think about this kind of thing quite deeply either. Once you forget about the relationship the bible has to modern religions, I think you can see it for the extremely interesting and profound book that it is (when it’s not being super boring, anyway). The bible, especially Genesis, is anything but moralizing. It’s very much a story about what it means to be part of a family, and this isn’t all happiness and roses. The first thing that happens to the mythical original family is fratricide, though the characters of genesis do slowly, over generations, figure out how not to destroy each other the first chance they get. The bible chronicles a profound struggle with morality and identity in a world (by our standards) so deadly and strange we’re often just unable to believe it. The bible’s status today as a fixed and univocal moral tablet is both absurd and irrelevant to the actual book.
And these are the people we understand and identify with enough to hold them to moral standards at all. The Babylonians were...like aliens. There’s a period before the Persians came where the following is all we know.
1) The most fertile region in the near east was suddenly and totally depopulated. 2) Every religious object in the entire region was moved to Babylon and covered in sacrifices.
There’s no report of a plague, or famine, or anything. That’s all we know. A decade later, the Persians came and took the city without any real struggle, and that was the end of Babylon.
What’s worse, going through another pregnancy and another time and resources teaching and caring for a new child, or murdering a truckload of people to preserve the one that already exists? Unless these are exceptionally evil people, the answer should be obvious.
Those aren’t accurate descriptions of the options.
You’re the one who laid out this thought experiment: it falls upon you to describe the options accurately.
Just for the record, I do care: anthropology is, I believe, an utterly crucial subject, and understanding what humans are capable of, how they invented different systems and methods to live together and apart, to associate and to resolve conflict—I think that’s absolutely essential if one wants to look at the world and at oneself with clear eyes.
I just found this nice quote on The Last Conformer which is supposed to prove that betting on major events is qualitatively different from betting on coinflips:
It seems to me that the problem exists for coin flips as well. If I flip a coin and don’t show you the result, your beliefs about the coin are probably 50/50. But if I offer you a bet at 50/50 odds that the coin came up heads, you’ll probably refuse, because I know which way the coin came up and you don’t.
According to the Dutch book argument for rationality, we are supposed to accept either side of any bet offered at the odds corresponding to our beliefs. In my example, that idea breaks down, because getting the offer is evidence that you shouldn’t take the bet. But then how do we formulate the Dutch book argument?
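A toy simulation of that selection effect (the setup is my own illustration, not anything from the quoted source): an informed offerer proposes the even-odds bet only when you would lose it.

```python
import random

# The offerer sees the coin and offers "heads at even odds" only when
# it actually landed tails; you win 1 on heads and lose 1 on tails.
def ev_of_accepting(trials=100_000):
    total, offers = 0, 0
    for _ in range(trials):
        heads = random.random() < 0.5
        if not heads:              # the offer is only made when you'd lose
            offers += 1
            total += 1 if heads else -1
    return total / offers

print(ev_of_accepting())  # -1.0: conditional on getting an offer, you always lose
```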
The selection effect you mention only applies to offering bets, not accepting them. If Alice announces her betting odds and then Bob decides which side of the bet to take, Alice might be doing something irrational there (if she didn’t have a bid-ask spread), but we can still talk about dutch books from Bob’s perspective. If you want to eliminate the effect whereby Bob updates on the existence of Alice’s offer before making his decision, then replace Alice with an automated market maker (setup by someone who expects to lose money in exchange for outsourcing the probability estimate). Or assume some natural process with a naturally occurring payoff ratio that isn’t determined by the payoff frequencies nor by anyone’s state of knowledge.
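One concrete instance of such an automated market maker is Hanson’s logarithmic market scoring rule (LMSR); here is a minimal sketch, where the liquidity parameter b and the trade size are arbitrary choices of mine:

```python
import math

# LMSR market maker for a binary event. The subsidizer's worst-case
# loss is bounded by b * ln(2); b = 100 is an arbitrary choice here.
def cost(q, b=100.0):
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, i, b=100.0):
    z = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / z

q = [0.0, 0.0]                               # shares sold on [event, not-event]
print(price(q, 0))                           # 0.5 before any trades
trade = 20.0
print(cost([q[0] + trade, q[1]]) - cost(q))  # ~10.5: cost of 20 event-shares
q[0] += trade
print(price(q, 0))                           # ~0.55: the price moved with the trade
```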
Omega appears and tells you you’ve been randomly selected to have the opportunity to take or leave a randomly chosen bet.
Doctors say “If I’m going to die, please don’t freaking try to keep me alive”. Normal people and doctors agree that they want a peaceful death, but only doctors are truly aware that a peaceful death is often a willing one.
Useful Chrome plugin for learning to do Fermi calculations:
http://blog.xkcd.com/2013/05/15/dictionary-of-numbers/
Looks interesting (but unfortunately it crashes when I attempt to install it in Chrome).
What is the proper font, spacing, and so forth for a LessWrong article?
The proper formatting for a LessWrong article is the formatting that you see in most LessWrong articles.
The way to achieve that is to use for formatting only the tools provided in the LessWrong editor (except for the button labelled “HTML” — use that only if you absolutely must). If you find it more convenient to write your article in an external editor, make sure that when you copy and paste it across, you copy only the text without the formatting. One way to do that is to prepare the text in an editor that does not support formatting, such as any editor that might be used to type program code.
I’m using a Mac, and copying from Pages into TextWrangler and then from TextWrangler into the LessWrong editor does not appear to have worked. What might be going wrong?
Also, information about people’s reactions to nonstandard formatting, and about how to go about getting standard formatting if you’re working from an external editor, should really be included in the LessWrong FAQ.
That’s strange. One gotcha is that if you accidentally paste in some formatted text, then delete it, the LW editor remembers the font and line settings at that point and will apply them to any unformatted text that is then pasted in. To make absolutely sure of eliminating all formatting from the editor, select everything and hit backspace twice. The first will delete the text and the second will delete its memory of the formatting. Then paste in the unformatted text, making sure that you actually copied it out of TextWrangler after pasting it in there.
In the last resort, click the HTML button and see if the HTML has something like an opening style tag (e.g. <span style="..."> or similar) at the top, and delete it (together with the corresponding closing tag at the end of the text). And of course, save the article as a draft so you can see exactly how it is being formatted before publishing.
Bayes’ theorem written a certain way is surprisingly effective and easy to use in Fermi estimates of population parameters and risks. Unless you are already quite well versed in intuitive Bayes, this is likely of interest.
Hastie & Dawes (2010, p. 108) describe the “Ratio Rule”, a helpful way of writing out Bayes’ theorem that is useful for the quick estimation of an unknown proportion:
Pr(A|B) / Pr(B|A) = Pr(A) / Pr(B)
(Ratio of conditional probabilities equals ratio of unconditional probabilities.)
To steal their example, it’s often reported that most ‘hard’ drug users also use (or started out with) pot, and this is often taken to support the notion that pot is a gateway to hard drugs. Hastie & Dawes point out that for the purposes of evaluating the ‘gateway’ claim, what we really want is not the reported value of Pr(has used pot | has used hard drugs) but rather Pr(has used hard drugs | has used pot) [*]. Suppose that Pr(pot|hard) ~ 0.9. We know that Pr(pot) ~ 0.5 (the fraction of Americans who’ve used pot at some point), and we estimate that Pr(hard) is lower by a factor of, say, 2.5 to 5, so the ratio Pr(pot) / Pr(hard) is between 2.5 and 5. By the ratio rule, Pr(pot|hard) / Pr(hard|pot) is also between 2.5 and 5, so Pr(hard|pot) is between about 0.2 and 0.4.
Another example: recently I found that the annual risk of dying by suicide for a young to middle-aged male is about 0.02%, as high as the annual risk of a middle-aged male dying in a car accident (!). I figured that taking into account my not having a history of mental illness should decrease this risk. Googling revealed that Pr(mental illness|suicide) = 0.9, and Pr(mental illness) is between 0.06 and 0.25 depending on the severity criteria you use. I want x = Pr(suicide|not mental illness), so I set up the ratios (assuming that the population proportions can be interpreted as annual risk):
Pr(suicide|not mental illness) / Pr(not mental illness|suicide) = Pr(suicide) / Pr(not mental illness)
x/0.1 = 0.0002/0.75 ~ 0.00027
x ~ 0.00003 (0.003%)
This seems much less worth worrying about. In the plausible range, the precise value of Pr(not mental illness) doesn’t matter much.
[*] What we would really like is Pr(has used hard drugs | started out with pot) but we can assume that the two are close.
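In case anyone wants to check the arithmetic, here’s a quick sketch of both estimates in Python (the helper function and the exact endpoints are mine; the rough inputs are as above):

    def ratio_rule(p_b_given_a, p_a, p_b):
        # Pr(A|B) = Pr(B|A) * Pr(A) / Pr(B): Bayes' theorem, rearranged.
        return p_b_given_a * p_a / p_b

    # Gateway example: Pr(hard|pot) from Pr(pot|hard) ~ 0.9, Pr(pot) ~ 0.5,
    # with Pr(hard) below Pr(pot) by a factor of 2.5 to 5.
    for factor in (2.5, 5):
        print(ratio_rule(0.9, 0.5 / factor, 0.5))  # 0.36 and 0.18

    # Suicide example: Pr(suicide|no mental illness) from
    # Pr(no mental illness|suicide) = 0.1 and Pr(suicide) = 0.0002.
    for p_no_mi in (0.75, 0.94):
        print(ratio_rule(0.1, 0.0002, p_no_mi))  # ~2.7e-05 and ~2.1e-05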
A meetup is coming up on July 4th in Tel Aviv. I want to post about it, but I’ve never done a meetup post before. Are there any non-obvious guidelines I should follow? A template? What about the map?
Click the ‘Add Meetup’ button in your user-box thing at the top of the sidebar, and fill in the fields with appropriate info. The map is automatically inserted based on the address you entered and Google Maps.
I never noticed that button. Thanks!
So, the technology is here to grow human pancreases in pigs and potentially improve the lives of millions of diabetics. The obstacles are mostly ethical/regulatory, apparently. Does this mean that we explicitly or implicitly value not creating a donor pig’s life, however short it might be, over quite a few human QALYs?
There aren’t a million type 1 diabetics in the world. I estimate 100k.
Cloned pancreatic transplants are “potentially” useful to the much larger number of type 2 diabetics, but it’s not clear. Really, it’s not even clear how valuable they are to type 1 diabetics—it’s generally believed to be autoimmune, so the new pancreas probably wouldn’t last long.
Yes, of course we should answer these questions by allowing this research, but the pancreas seems a weird choice to start.
PS—the subtitle “We have excellent snack foods” seems like a weird choice to address to a Japanese audience.
Also, T1D is pretty well managed with modern insulin pumps and blood sensors and finger checks. I have had T1D for 15 years now, and the insulin-related technology and quality of life has really improved over that time. I wouldn’t take a transplant option unless it was grown from my own cells (to negate rejection issues) and the transplant itself had very low failure and complication rates (most of my pancreas is working fine, after all, and I don’t want to risk it).
In current pancreas transplants, you keep your old pancreas, for that very reason. Yes, surgery is bad for your health, but T1D is pretty bad, even if “well-controlled.” If the cloned islets aren’t destroyed in the same way, it’s almost certainly a win. It occurs to me that foreign islets might avoid the autoimmune problem, but it’s only worth it for the worst T1D cases.
The article makes it sound like Japan has unusually strict rules here. Quoting the article: “Japan currently has a ban on what’s called “in vivo” experiments, meaning “within the living.” Essentially, Japanese law forbids experiments that involve a whole, living creature, like these piglets. (“In vitro,” or “within the glass,” is permitted.)”
I think in the US, this would be a relatively easy sell to institutional review boards. It’s legal and widely considered ethical to raise pigs and then eat them. Is there an ethical issue posed here that doesn’t arise for pork?
If it’s OK to eat a pig but throw away the pancreas, then it should be OK to implant the pancreas and eat the rest of the pig.
Although law doesn’t have to be as internally consistent as most ethical systems.
We explicitly value legal regulations over QALY in thousands of situations.
Is there a general way to answer questions like this, which often occur in economics and the social sciences:
“Does institution X play a part in keeping parameter Y stable? It looks like parameter Y has been really stable for awhile now. Is institution X doing a good job, or is it completely useless?”
Well, to go ahead and state the incredibly obvious: in cases where institution X is not equally well-established globally, one thing to look at is variations in X among different nations, geographic regions, populations, etc. (depending on the kind of thing X is). If Y remains equally stable across the board while X varies, that’s evidence that X doesn’t have much to do with Y.
Part of the problem is that changes in X might be aimed at keeping Y stable when some other factor Z varies. See Milton Friedman’s Thermostat.
For example, it’s hard to answer the question “do armies stop invasions?” by using correlations, because the rulers can adjust the strength of the army in response to the risk of getting invaded, so the resulting risk depends mostly on the risk tolerance of the rulers.
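A toy simulation (with made-up numbers, just a sketch) shows how badly correlations can mislead here: if rulers set army strength X to track the threat Z, the resulting risk Y stays nearly flat and barely correlates with X, even though X is absorbing almost the entire threat.

    import random
    from statistics import mean, stdev

    def corr(a, b):
        # Pearson correlation, computed by hand to stay self-contained.
        ma, mb = mean(a), mean(b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)
        return cov / (stdev(a) * stdev(b))

    random.seed(0)
    zs = [random.gauss(10, 3) for _ in range(10_000)]  # external threat Z
    xs = [z + random.gauss(0, 0.5) for z in zs]        # army X tracks Z, imperfectly
    ys = [z - x for z, x in zip(zs, xs)]               # residual invasion risk Y

    print(corr(xs, ys))          # weak (~ -0.16): the army "looks" nearly useless
    print(stdev(ys), stdev(zs))  # ~0.5 vs ~3: yet risk is far more stable than threat

To recover the army’s true effect you’d need variation in X that isn’t a response to Z, e.g. the cross-country comparisons suggested above.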
Does anyone want to make a small study group to read one of these books at a relatively slow pace?
Causality: Models, Reasoning and Inference
Probability Theory: The Logic of Science (Gelman has been recommended over Jaynes here; I’m flexible, but I’d rather read Jaynes)
Martin Peterson—An Introduction to Decision Theory
Anything else similar/LW relevant
I’ve been meaning to read these (which I learned about from LW) for a long time and just now have the time.
Causality looks like the best option: the entire first edition is freely available on Pearl’s site here. There is an overview of the 2nd edition’s chapters here.
I’ve been meaning to read Causality for a long time now: I’d be interested.
Do you have any views on edition 1 vs edition 2? My library doesn’t have ed 2 so I’m wondering whether the differences are important.
Not quite sure what you’re asking: I haven’t read either, so I don’t really have anything to say about the two?
Try: http://lesswrong.com/lw/h5y/lw_study_group_facebook_page/
How is that going?
Also, despite it saying one doesn’t need an account, I can’t actually view the Facebook page.
Er, well, it’s kinda slowed to a halt recently (I know I’ve been silly-busy) but I plan to try and kick something into motion in about a week. Note: I have said that before. Also, you can look at the workflowy list and contact the people who are looking for people to work alongside on a similar topic.
You can’t get in? Oops. I’ll have a look at editing that… I’ll reply once I’ve done it.
So, I just moved to Europe for two years and finally got financial independence from my (somewhat) Catholic parents, and I want to sign up for cryonics. Is there international coverage? Is there anything I should be aware of? Are there any steps I should be taking?
I was thinking about writing down everything I know. After reflecting on that for a few seconds, I realized what a daunting task I have set out to do. Has anyone tried this, or does anyone have a suggestion for how I should go about it, if at all?
I think you’ll get more concrete suggestions if you explain what you hope to accomplish with this proposed task.
See: How to Make a Complete Map of Every Thought you Think [pdf]
An excerpt from the introduction (tldr: beware of eating yourself):
I’ve settled for (1) keeping a structured list of all books from which I’ve learned something worthwhile, and (2) a log for current ideas (with no more than a few short entries a week). This is sufficient to locate and efficiently relearn most barely-remembered ideas when they become relevant again.
Writing down everything you know seems pretty pointless. Writing down everything you fear forgetting might give you a smaller but more useful list, since it lets you cull out anything you’re in no danger of forgetting (e.g. all the arithmetic facts) as well as anything you wouldn’t care if you forgot (e.g. the vast majority of knowledge in your head).
I actually sort of do this, in a private git repository where (alongside lists of interesting typically-paragraph-sized quotes) I keep lists of interesting (typically-sentence-fragment-sized) topic names. Sometimes the names serve as mnemonics that merely remind me of an interesting fact I once encountered but haven’t thought about recently (e.g. “Rai stones”). Sometimes I’ll skim through the lists and encounter a topic that I’ve completely forgotten about (e.g. “burying the corpse effect”) and I’ll quickly Google to see why I once thought it was so interesting to begin with. A little organization helps. E.g. “burying the corpse effect” was in my “economicsbits” file under the hierarchy “Financial markets, investment”, “Market manipulation”, “Cornering the market”, so it was easy to tailor web searches to lead me to results from economists rather than morticians.
I don’t know if this is useful for anything more than entertainment. TimS had a very good question here.
I started something like this a while ago. I was trying to write papers for one of my classes and couldn’t find a reference I needed. After about the third time this happened, I figured I ought to make some kind of searchable list of references, with summaries of what they contain and links to the files. I use a Google document now, with summaries of books I’ve read and notes from my classes, in addition to references. What I really want is something like Workflowy where I can collapse bulleted points. Workflowy would be fine, but I’d be worried about going over their limit and having to pay for it, since I have a lot of bullet points. In the meantime, I use Google Docs’ “table of contents” feature so I have that orderly list I want.
I don’t put “everything” in it. My general rule is that it has to be either useful, something I’d likely forget, or something interesting. I also link to everything so I don’t have to search my history.
Writing down only every arithmetic fact you know, assuming you have basic knowledge of math, would, in theory, take an infinite amount of time. In practice, the universe would end first.
Your mind doesn’t have an infinite amount of memory so that can’t be the case. You could use your mind to generate an infinite amount of arithmetic facts, but just recording the knowledge already there would be much faster. And if you are doing this for practical purposes, I imagine you would limit yourself to just relevant or interesting facts or beliefs, and not literally everything.
What do you mean by everything? Surely not literally everything?
The value/work ratio seems pretty low to me. Is this going to help you achieve your goals? If so, how? If not, is it fun enough to be worth it for that reason?
NYTimes blog article: How Carbs Can Trigger Food Cravings.
(If you ever have trouble accessing a NYTimes article (there was a script which doesn’t work anymore), having exceeded your monthly allotment, remember you can just google the title then follow a link from e.g. Google News, which won’t count against your quota.)
My overview page is missing some comments. They do show up on my comments page. This is true of the other accounts I checked too: Eliezer_Yudkowsky, JonahSinick, lukeprog, and orthonormal.
Edit: Looks like it’s fixed. (26 June 2013 07:00:00AM UTC)
Richard Dawkins’ New Strange Video
Now, as far as I’m concerned, he’s preaching to the choir: not only does his message here sound like oversimplified, old news, but the freaky video is a mildly unremarkable instance of Seapunk which is in turn a memetic mutation of many, many things.
Nevertheless, I’m worried that many commenters seem to have decided that this clip was strong evidence that Dawkins has “gone insane”. Don’t they have any sense of humor?
Also, for comparison, a video done by someone who probably is “insane” (or rather, depressed).
Seriously, though, we need to look into the use of the words sane/insane and get a proper convention going, because the way it is now, “insane” sounds like “someone who follows thought patterns I don’t share”, rather than, say, “someone who’s unsuitable for punishment by the judiciary because they can’t be held responsible for their actions because… they don’t have free will?”.
Actually the legal definition is even more confusing...
Just had a gender-balanced LW meetup in Bratislava, 4:4. The small numbers most likely don’t prove much; it could have been a coincidence. But after some discussions we had at LW, I would like to ask—how likely is it to have gender balance at a LW meetup?
What happened to the “How to have space correctly” article? I can’t find it in either discussion or main.
Edit: It’s reappeared with revisions; I must have been trying to access it while the author was editing.
I saw a video link from LW (I think it was from here) in which a man gave a TED-style talk arguing that artists should try to elevate pornography to a serious art form. Does anyone know the video I’m talking about? I’m pretty sure it wasn’t an actual TED talk, but it was structured similarly.
Edit: Found it. It was a talk by Alain De Botton.
This idea seems odd to me. It’s not like it hasn’t been done before; it’s just that when people create highly sexually titillating works with serious artistic merit, we don’t call them pornography.
A Google for ‘tedx pornography art form’ suggests Cindy Gallop’s “Make Love Not Porn”.
This talk was given by a man. I’m pretty sure it wasn’t actually TED affiliated.
In my google searches I found that video among others, mainly about porn’s detrimental effects.
Is being fashionable optimal? When it comes to fashion, it seems apparent that the main goal is to make an impression on others, so it seems reasonable to conform one’s fashion to what other people call fashionable. However, we also know that there are plenty of cases where people’s expressed preferences do not match their actual preferences. Is there a compelling reason to think that expressed fashion preferences are actually those that give the best impression? Does anyone know of studies on this? Does anyone have any anecdotes one way or the other?
(Here I always mean ‘fashionable’ with respect to the intended audience, not with respect to some ‘high art’ sophisticates.)
This question obviously generalizes to a wide variety of status-motivated choices. Does reading ‘fashionable’ books actually make a better impression on someone than reading, say, a well-loved and popular but ‘childish’ book that the audience is likely to have fond memories of? Does driving a nice car give a better impression than driving a car similar to the Joneses’?
‘Fashionable’ isn’t a two-place word (http://lesswrong.com/lw/ro/2place_and_1place_words/), but you can treat it like one if, instead of “fashionable”, you use the phrase “appealing to”. Things that are fashionable are appealing to more people on average, but once you realize that you’re optimizing for being appealing to someone, you can tune things to whoever that is.
Unless I misunderstand you, this is what I meant to address with my parenthetical statement. Of course we must take the audience into account. The question is whether the audience’s expressed preferences (i.e. ‘fashionable’ meaning doing what’s popular with that audience, not meaning what is most effective wrt that audience) match what actually gives the best impression.
In the past, I had implicitly assumed that these two coincided (after all, if you want someone to prefer something, you should do what they think they prefer, right?) but I just noticed that the case isn’t closed, so to speak.
I just saw this:
AUV here means Autonomous Underwater Vehicle.
No, I’m not expecting this to FOOM, but I found the language striking, especially with that name. Machines that generate their own missions when you leave them unattended! Machines that continually question their assumptions! Machines that keep going, autonomously, no matter what! Underwater, where it’s hard to find anything that wants to stay hidden!
This sounds like the opening to a science-fiction novel, and not even a bad one.
I was reading Outlawing Anthropics, and this subconversation especially caught my attention. I got some ideas, but that thread is nearly four years old, so I am commenting here instead of there.
My version of the simplified situation: There is an intelligent rational agent (her name is Abby; she is well versed in Bayesian statistics) and there are two urns, each containing two marbles. Three of the marbles are green. They are macroscopic, and therefore distinguishable, but not by Abby’s senses: she can number them marbles 1, 2 and 3, but she is unable to “read” the number even on close examination. One marble is red, and she can distinguish it; it gets number 0. One urn has marbles 0 and 2 (this is the “even” urn); the other has marbles 1 and 3 and is called “odd”. Again, Abby cannot distinguish the urns without examining the marbles. Now, the assistant takes both urns to another room, computes the 256th binary digit of exp(-1), and comes back with just the one urn of the corresponding parity. Abby is allowed to draw one marble (it turns out to be green), then the urn is taken away, and Abby is, in effect, asked to state her subjective probability of the urn being odd (by accepting or refusing some bets). Only then is she told that in another room there is another person (Bart) who is being presented with the same choices after drawing the other marble from the very same urn. And finally, Abby is asked (informally) what her averaged expectation of Bart’s subjective probability of the urn being odd is (now that she sees her marble is green), and, if this average is different from her own subjective probability, why she is not taking that value as indirect evidence in her calculations (which clearly means that the assistant is just messing with her).
The assumptions are that neither Abby nor Bart has a clue about the binary digits of exp(-1); they are not able to compute that far, and so they assign a prior probability of 50% to the urn being odd. Another assumption is that Abby and Bart both chose their marbles randomly, and in fact they do not even know which of them drew first. So there are 4 “possible” worlds, numbered by the marble Abby “would” have drawn, all of them appearing equally probable before the drawing.
The question is (of course) what subjective probability Abby should use when accepting/refusing bets, and how to give a witty retort to the assistant’s “why” question, where applicable; or else, how to explain why Boltzmann brains are not that big an obstacle to rationality.
And here I am, way over my time budget, having finished around one third of my planned comment. So I guess I shall leave you with questions for now, and I will resume commenting later.
Edit: Note to self: Do not forget to include http:// in links. RTFM.
Edit: “possible” worlds, numbered by marble Abby has drawn → “possible” worlds, numbered by marble Abby “would” have drawn
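Edit: For what it’s worth, here is a quick Monte Carlo sketch of my reading of the setup (the code and the fair-coin stand-in for the unknown digit are mine):

    import random

    rng = random.Random(0)
    green_draws = odd_given_green = 0
    for _ in range(1_000_000):
        urn = rng.choice([(0, 2), (1, 3)])  # fair coin stands in for the digit's parity
        marble = rng.choice(urn)            # Abby draws one marble at random
        if marble != 0:                     # condition on Abby seeing green (0 is red)
            green_draws += 1
            odd_given_green += (urn == (1, 3))

    print(odd_given_green / green_draws)    # ~0.667, i.e. Pr(odd | green) = 2/3

On the same reading, if the urn is odd Bart also draws green and lands at 2⁄3, while if it is even he draws the red marble and lands at 0, so Abby’s expectation of Bart’s posterior is (2/3)(2/3) + (1/3)(0) = 4/9, which I take to be the discrepancy at issue.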
Somewhere I think there’s a quote about Eliezer writing faster with a writing partner. Could someone point me to this quote?
Might have been this http://lesswrong.com/lw/gwo/coworking_collaboration_to_combat_akrasia/
Odd brain exercise I find entertaining: Using only knowledge about this universe, try to determine what kind of universe would be most likely to simulate our kind of universe.
For example, to my eye, general relativity, plus the propensity of pi to pop up in odd places, implies something like a hierarchy-relative polar coordinate system is the standard mathematical model in our host universe, as opposed to the Cartesian coordinate system we tend to default to. So, what would a universe look like such that this is the most intuitive way of considering data? Such a view seems most likely to arise in a nonoriented universe; gravity would be unlikely, as gravity provides a natural plane of orientation, so something like universal attractive and repulsive forces probably wouldn’t exist.
WHY PEOPLE REALLY HAVE DOGS
People really have dogs so they can talk to themselves
without feeling crazy. Take me, for example, cooking
scrambled eggs, ranting about this dumb fuck
who sent naked pictures of himself to strange women,
a politician from New York, I read about it in the paper,
start telling my nervous cock-a-poo, blind in one eye,
practically deaf (so I have to talk extra loud) all about it
and he’s looking at me, poor thing, like he thinks I’m
the smartest person he’s ever heard and I go on, him
tilting his head, and when he sees me pick up my dish
of eggs he starts panting and wagging his tail, I tell him,
no, they’re not for you, but then I break down and give
him some knowing full well that feeding from the table
is rule number one of what you don’t do with dogs,
but I do it anyway because he wants them so bad,
because it makes me feel good to give him what he wants,
and I expound more to make sure he’s aware of the whole
political scandal, the implications for the democrats,
the hypocrisy, tell him dogs are rarely hypocrites, except
when they pretend to be interested in you when all they want
is your food, take him, for example, right now pretending
to love me so much when all he wants are my eggs, me
talking to him when all I want is to say my opinions with no one
interrupting, feel my voice roll out on a clear Saturday morning,
listen to it echo from the kitchen to the bath, up through the ceiling,
out to the sky, the voice from within, all alone in the morning
as the light outside catches the edge of the silver mixing bowl
where the remaining, uncooked eggs sit stirred, ready to toss
into the pan, cooked, eaten by whomever pretends to want them.
Kim Dower
All right, since I was told on my very first, and abortive, discussion thread that I should post a larger summary or excerpt of the link I had on there if I wanted to comport with LW’s norms, let me do that here instead (since my karma is now too low to make another discussion post).
So I’ve written a long article summarizing a life philosophy which asserts the significance of a certain kind of meditative self-expression for grasping human freedom and understanding the significance of pain and suffering in human life.
Any LessWrong readers interested in thinking about the meaning of life, meditation, psychology, philosophy, spirituality, art, or in better understanding and handling their own minds should be interested.
Here is the largest excerpt I could post without the comment being rejected as too long:
The next time you stub your toe or otherwise hurt yourself, take a moment to become curious about exactly what the pain is like. What exactly does it feel like? Is it stabbing? Does it radiate? Is it blunt or sharp? Does it come and go? Is it cold or hot? Does it remind you of someone, or something, or some place?
As soon as you suspend the pain in your mind, the pain immediately changes. It becomes interesting. Like Keanu Reeves might stop a bullet in the air in The Matrix, you stun the pain by paying it conscious attention and then examining it like a scientist or artist might. It becomes fascinating. And then, as you describe it, its character changes more and more. It becomes sharp, specific, and beautiful. It might still be pain, but still, even as pain, it is no longer painful in the same way. Now it is a jewel. You see within it organization, ideas, intelligence.
Through the process of reflection and then expression, we can transform pain into beauty. This is true not just of physical pain, but of all pain, and indeed, of any experience. This is the essence of human freedom and power.
The most interesting and fundamental question in the world is what we’re doing here in this life. What’s the point? I spent years thinking about this question — going through psychology and western and eastern philosophy, and asking this question over and over.
I think I have an answer, at least a certain kind of partial answer. It’s certainly not totally original. Yet it is not often seen, not often heard.
My problem is how to explain it in words. I have tried many formulations on paper and in my head and none of them seem quite right. So I’ve decided to share several of them with you here, and hope you get the point. I’m trying to indicate a sensibility about the world — a way of relating to it, of seeing it, of dealing with it. What I’m trying to say cannot be wholly communicated in words (though can anything?). I need you to get the feel of it, to have the shift in perspective without which none of this will really make sense.
There’s a zen story about the monk who points to the moon. And the disciple keeps looking at his finger. ‘No, no, up there!’ the monk tries to say, but the disciple cannot understand the concept of pointing. That’s the kind of barrier I feel I’m up against.
Let me give you another example: to someone new to wine, wine tastes like wine. Maybe red wine tastes like red wine, and white wine tastes like white wine. Someone who drinks a little more, and thinks about what they drink, perhaps starts to identify sour and bitter, dryness and acidity. But for the wine connoisseur and critic, the vocabulary and the experience expand. They start to be able to detect and name notes of musk, florality, and minerality. They distinguish the taste of the wine at the front of the palate from the taste at the back. They comprehend the history and the heritage of the wine, its lineage in the soil, the effect of the sun and the rain on the grapes that made it. They taste and appreciate the various nuances of fermentation.
For the connoisseur, the wine unfolds into a much more complex, in-depth experience. It happens not just because the person drinks a lot of wine, but because they pay attention, and because they analyze the wine, and come up with labels, and break down and express their experiences.
The same way the experience of the wine reveals layer after layer with increased attention and thought, the same general idea can apply to life. Any particular experience you’ve had without thinking about it, you’ve barely even lived. It passed by and vanished, and you missed a lot in it, much as a rookie misses almost everything interesting in the wine she’s tasting. If you take an experience of yours, pay it attention, and express what it is like, you will find that the experience starts to refine itself. It becomes complex, multi-layered, rich, fascinating, interesting, beautiful. It ceases to be one big blob and starts to become a multitude.
This revelation of layers of intelligence, of pattern but also of chaos—interesting chaos—is the reward for this expression.
Expression is the key.
Mere observation is not enough. Simply remembering an experience is not enough. You just remember the same pale, shallow memory you had before. But if you remember an experience and then 1) think deeply about it, 2) try to honestly and originally express exactly what it was like for you, and 3) put this expression in some form (music, poetry, film, or even just a few sentences in your journal) then 4) that will allow you to see the experience in a new light. It will force you to choose the important aspects of the experience. Those aspects of the experience will come into focus. Like a near-sighted man putting on glasses for the first time, the experience will become dramatically sharper.
Of course, expression inevitably distorts. Even a good map is partly wrong. It is still illuminating. A map has to distort and simplify to be useful. Similarly, every expression breaks down experience in a way that is partly wrong. One kind of expression will highlight certain facets of an experience; another expression will highlight other facets. Experiences can be expressed in an endless variety of ways.
This sensibility I’m trying to communicate results in the appreciation of “subtlety.” To a casual listener of music, when someone plays a key on a piano, they hear it as a single note. To a musician or a music critic or an audiophile, though, the note has at least three parts. The first is the attack, when the key is struck and a tiny hammer literally pings against a tight string inside the piano. The second is a middle portion of the note. The third is the decay, as the note fades. Each of these is different. And in fact even within each of these parts the note changes. Expression is like an instrument that allows you to see the worlds within every world.
Observing and expressing any experience streams down beautiful ideas that allow you to see it in a new way. The experience discloses connections to other experiences, patterns within it, intelligence.
To appreciate the finer and finer details of these changes — to see distinctions and discern refinement where once there was only sameness — is the spirit of subtlety. It is to see not just a thing but the presence of the space surrounding that thing. It is the spirit of the Japanese tea ceremony.
It is the spirit of not trying to overwhelm with a simple rush of pleasure, but to see deeper and deeper and quieter and quieter parts of something. It is why John Cage created an entirely silent piece — he wanted to make a statement about this spirit of subtlety, that looks for the shyest and most reluctant details. It is the spirit of, when you’re hungry, not just gobbling up food, but making food that tastes good, and then, looks good. Taking your time to do that prolongs the hunger, but then allows you to explore that hunger in a more and more elegant, artistic way.
The Magic Equation: Desire = Pain
And if you want to see these subtleties, desire is crucial. You do not fully control your mind any more than you fully control the weather. You are at all times in a mental landscape, and the most important feature of this landscape is desire. We always want something — or perhaps to avoid something — and this focuses and defines our attention. We can use this desire as the starting point of our attention and expression.
Desire, which unfulfilled is the same thing as pain, is what allows you to appreciate anything. So the connoisseur realizes that desire is a precious thing. It should not be used up too early. It is what allows you to be interested in something. As soon as you’ve had an orgasm, interest in sex decreases. It is the desire for sex, the ache, the hunger, that can motivate you to explore subtler and subtler realms of sexuality: to be interested in those realms. And that is why celibacy shows up so often in the world’s mystical traditions. It provides the motivation to seek sexuality not in physical bodies, but in knowledge and contemplation. The subtlest sexual objects are ideas.
The artistic mindset I am trying to communicate sees emotional or physical pain — unfulfilled desire — as a precious, specific energy that we can capture like a firefly in a jar, to follow its spirals and whirls. We can use it by investigating the desire itself. The desire is an experience. We can attend to it, note its intricacies.
Read the rest of it...
This may be overly harsh, but:
This essay is nonsense. There’s an easy trick for analyzing writing like this: As you read, mentally remove all of the emotionally charged words and connotations and see if the argument still makes sense. When we get rid of all the flowery language here, we end up with (admittedly uncharitable) things like, “Humans can think about pain and other experiences and use these thoughts to create art that others find pleasurable” and “By paying close attention, you can gain more understanding of complex things (e.g. wine tasting).” None of this analysis even mentions the actual, causal reasons human beings suffer, or established theories about coping with suffering and creativity. As a result, I don’t see anything particularly insightful or useful.
I’m not sure what essay you read. Even my very first paragraph doesn’t fit into this framework.
Assuming what you said is true, can you give a concrete example, in one sentence, of what I should choose differently than I do now?
For example, I would draw from my experience as a lawyer to say:
Absolutely. To start with, I give a simple concrete suggestion in the first paragraph above about how to deal with physical pain.
Another concrete suggestion might be: any time you feel annoyed or angry, express in words exactly what the annoyance or anger is like, using metaphors, and going back and forth between your words and your experience to make sure you’ve captured the experience in as accurate and original—or non-cliched—a way as you possibly can.
A broader way to say the same thing might be: focus on those experiences that cause you emotional disturbance and express them, as accurately and as originally as possible, into an artistic medium of your choice (words, music, painting, whatever), using metaphors appropriate to that medium to convey what your experience is like.
If you do that, my contention is that you will find that your negative experiences bear within them a wealth of beauty.
There’s more to it than that, but those are a couple of concrete suggestions.
Constructive suggestion: Write more like this, less like what you posted about.
Substantively, I think one could substitute any emotion or sensation and get the same advice. Thus:
Which I expect is true. But pain is generally no fun, and it isn’t clear that you think avoiding pain is worth the effort.
When I stub my toe, I’m not doing something wrong by first choosing to figure out why I stubbed my toe and what to change to avoid that in the future. And once I’ve done that, I’m not sure I have time to do what you suggested.
The reason I write the way I do above is that I’m giving a philosophical vision, not a series of concrete suggestions. I’m trying to explain why suffering exists generally, and how humans have freedom not just in spite of suffering but because of it.
Of course you are right that you can express anything and possibly enhance it. But I don’t necessarily think that self-expression always alleviates pain, and I don’t think enhancing positive experiences is its point either.
What I think it does is something different. It opens up an aesthetic dimension of experience.
Let me give you another example. You watch a sad movie and are totally absorbed by it. Then you remember that it’s just a movie. And you start to think about and notice the acting, the cinematography, the set design, the costumes, the writing.
Now that doesn’t make it any less of a sad movie. It remains tragic. But it opens up aesthetic aspects of the movie-going experience.
Or to take your example of sex. Thinking closely about an experience of “good sex” might actually reveal it to be, on reflection, not such good sex. So your memory of it, once you become more critical, might actually become more negative. So it need not enhance your experience.
What it will do, regardless, is deepen the aesthetic facet of the experience, deepen your appreciation of the complexities of it.
In this venue, philosophical vision that doesn’t have implications for personal choice and behavior is not valued, which might partially explain your prior negative reception.
That said, I’m not sure you need to concede that you have no suggestions for folks. You seem to be suggesting that “deepening appreciation of the complexity of experience” is something worth doing, and you have some thoughts about how to do that.
Have you heard of Focusing? It’s a psychological system based on that premise.
Yes, Focusing would be related, and it certainly seems like an excellent technique, but it’s not quite the same thing. Focusing is particularly oriented towards bodily sensations, whereas I’m talking about experiences in broader terms, including but not limited to the body. Focusing is also a bit more passive (waiting for thoughts to occur to you) and less oriented towards art and expression. Focusing is also more oriented towards words, whereas I talk more broadly about other means of expression. And of course the underlying philosophical frameworks are also different.
What you suggest has the benefit of improving one’s eloquence and accuracy in conveying experience.
What you suggest can turn the unproductive into the refreshingly inspired and productive.
These suggestions need no philosophical support, lest someone challenge the assumption that they are inherently desirable. Simplicity of expression carries persuasion with it, for readers can decide for themselves whether they want the effects; pre-emptive arguments can turn them away.
I have flouted this advice almost every time I installed software or signed up to a website over the last couple decades, and AFAICT I have never had much trouble as a result.
Upvoted for politeness. Still didn’t want to read more than a couple of paragraphs due to craziness. Sorry.
I appreciate the politeness.
For what it’s worth, I did not detect any craziness in the first section of the essay.
This is reminding me of the Enneagram. The idea is that people have basic habitual ways of relating to the universe—all the standard ways (the Enneagram has nine of them) are useful but incomplete, and all of them can go bad or be refined into something very valuable.
Accurate perception is important, but so is action.
20 Signs You Studied Philosophy In College