I was chatting with Toby Ord recently about a series of events we think we’ve observed in ourselves:
In analyzing how the future might develop, we set aside some possible scenarios for various reasons: they’re harder to analyze, or they’re better outsourced to some domain expert at Cambridge, or whatever.
Some time passes, and we forget why we set aside those scenarios.
Implicitly, our brains start to think that we set aside those scenarios because they were low probability, which leads us to intuitively place too much probability mass on the scenarios we did choose to analyze.
I wish I had an intuitive name for this, one that makes an analogy to some similar process, à la “evaporative cooling of group beliefs.” But the best I’ve heard so far is the “pruning effect.”
It may not be a special effect, anyway, but just a particular version of the effect whereby a scenario you spend a lot of time thinking about feels intuitively more probable than it should.
I had actually been wondering about this recently. People define a psychopath as someone with no empathy, and then jump to “therefore, they have no morals.” But it doesn’t seem impossible to value something or someone as a terminal value without empathizing with them. I don’t see why you couldn’t even be a psychopath and an extreme rational altruist, though you might not enjoy it.
Is the word “psychopath” being used two different ways (meaning a non-empathic person and meaning a complete monster), or am I missing a connection that makes these the same thing?
You don’t notice someone has no empathy until you see them behaving horribly. The word is being used technically to refer to a non-empathic person, but people assume that all non-empaths behave horribly because (with rare exceptions like this neuroscientist) all the visible ones do.
I hadn’t realized it before, but the usual take on non-empathic people—that they will treat other people very badly—implies that most people think that mistreating people is a very strong temptation and/or reliably useful.
The only acquaintance I’ve had who was obviously non-empathic appeared to be quite amused by harming people, and he’d talk coldly about how it would be more convenient for him if his parents were dead. If I were a non-empathic person who’d chosen a strategy of following the rules to blend into society, I would find it very inconvenient for people to think I was anything like him, and would therefore attempt to emulate empathy under most conditions. Who would want to cooperate with me in a mutually profitable endeavor if they thought I was the kind of person who would find it funny to pour acetone on their pants and then light it on fire? Having people shudder when they think of me would be a disadvantage in many careers.
This creates a good correlation between visible non-empathy and mistreating people without requiring a belief that mistreating people is generally enjoyable or useful.
Killing people in a computer game is fun for many people.
Without empathy, anything you do with other people is pretty much a game. Finding a way to abuse a person without being punished for it is like solving a puzzle. One could move to more horrible acts simply as a matter of curiosity, just as a person who has completed a puzzle wants to try a more difficult one.
(By the way, this discussion partially assumes that psychopaths are exactly like neurotypical people, just minus the empathy. Which may be wrong. Which may make some of our conclusions wrong.)
One of the principles of interesting computer games is that sometimes a simple action by the player leads to a lot of response from the game. This has an obvious application to why hurting people might be fun.
Without empathy, anything you do with other people is pretty much a game.
No it isn’t. Why don’t you try to crawl out of your typical mind space for a moment?
Killing people in a computer game is fun for many people.
That’s because it usually has good consequences for the player, the violence is cartoony, and NPCs don’t really suffer. You could be an incredibly unempathetic person and still not find hurting real people fun even on a gut level, because it has so many other downsides than your mirror neurons firing.
I myself possess very little affective empathy, and find people suggesting that I therefore should be a sadist pretty insulting (and unempathetic). I’m also a doctor, so you people should tremble in fear for my patients :)
(By the way, this discussion partially assumes that psychopaths are exactly like neurotypical people, just minus the empathy. Which may be wrong. Which may make some of our conclusions wrong.)
Well yes, it’s clearly fun for at least some people. It’s just that the observations do not require anyone to think that mistreating people is strongly tempting for many, most, or all people, which is how I read your comment above.
If I were a non-empathic person who’d chosen a strategy of following the rules to blend into society, I would find it very inconvenient for people to think I was anything like him, and would therefore attempt to emulate empathy under most conditions.
That’s exactly how I approach the situation. I find the claim that I can’t be moral without empathy just as ridiculous as you would find the claim that you can’t be moral without believing in god. I also find moral philosophies that depend on either of them reprehensible. Claiming moral superiority because of thoughts or affects that are easy to feign is just utter status grabbing in my book.
Imagine that you find $1000 on the street. How much would you feel tempted to take it?
Imagine that you meet a person who has $1000 in their pocket. Assuming that you feel absolutely no empathy, how much would you feel tempted to kill the person and take their money? Let’s assume that you believe there is almost zero chance someone would connect you with the crime—either because you are in a highly anonymous situation, or because you are simply too bad at estimating risk.
Not very tempted, actually. In this hypothetical, since I’m not feeling empathy the murder wouldn’t make me feel bad and I get money. But who says I have to decide based on how stuff makes me feel?
I might feel absolutely nothing for this stranger and still think “Having the money would be nice, but I guess that would lower net utility. I’ll forego the money because utilitarianism says so.” That’s pretty much exactly what I think when donating to the AMF, and I don’t see why a psychopath couldn’t have that same thought.
I guess the question I’m getting at is, can you care about someone else and their utility function without feeling empathy for them? I think you can, and saying you can’t just boils down to saying that ethics are determined by emotions.
I guess the question I’m getting at is, can you care about someone else and their utility function without feeling empathy for them? I think you can, and saying you can’t just boils down to saying that ethics are determined by emotions.
I think that ethics, as it actually happens in human brains, are determined by emotions. What causes you to be a utilitarian?
There’s more to it than that. How about upbringing and conditioning? Sure, it made you feel emotions in the past, but it probably has a huge impact on your current behaviour although it might not make you feel emotions now.
Nitpick: I’ve seen a distinction between affective empathy (automatically feeling what other people feel) and cognitive empathy (understanding what other people feel), where the former is what psychopaths are assumed to lack.
In practice, caring without affective empathy isn’t intuitive and does take effort, but that’s how I view the whole effective altruism/”separating warm fuzzies from utilons” notion. You don’t get any warm empathic fuzzy feelings from helping people you can’t see, but some of us do it anyway.
This is a valid point and it actually makes my statement stronger. Simply understanding what people like/dislike may not be considered ‘true empathy’, but caring about what they like/dislike certainly is.
If I make chicken soup for my friend when he’s sick, and then I feel good because I can see I’ve made him happy, that’s empathy. If I give $100 to a charity that helps someone I will never see, that’s not empathy. The reward there isn’t “I see someone happy and I feel their joy as my own.” It’s knowing abstractly that I’ve done the right thing. I’ve done both, and the emotional aspects have virtually nothing in common.
All forms of empathy must necessarily be indirect. When you see your friend happy, you don’t directly perceive his happiness. Instead, you pick up on cues like facial expression and movements. You extract features that correspond to your mental model of human happiness. Let me make this clear and explain why it’s relevant to the discussion.
Let’s say your friend is asleep. You make him chicken soup, leave it on the table, and go to work. He later sends you a single text, “Thanks, the chicken soup made me really happy.” This puts a smile on your face. I’m pretty sure you would consider that the first form of empathy, even though you never saw your friend happy. Indeed, the only indication of his happiness is several characters on a phone display.
Now let’s take this further. Let’s say every time you make your friend chicken soup it makes him happy, so that you can predict with confidence that making him chicken soup will always make him happy. Next time you make him chicken soup, do you even need to see him or get a text from him? No, you already know it’s making him happy. Is this type of empathy the first kind or the second kind?
I’d call it the first kind, because it actually causes warm-fuzzy-happy feelings in me. My emotion reflects the emotion I reasonably believe my friend is feeling. Whereas the satisfaction in knowing I have done the right thing for someone far away whom I don’t know and will never meet is qualitatively more like my satisfaction in knowing that my shoes are tied symmetrically, or that the document I have just written is free of misspellings. I’ve done The Right Thing, and that’s good in an abstract aesthetic way, but none of my feelings reflect those I would believe, on reflection, that the recipient of the good deed would now be feeling. It doesn’t put a smile on my face the way helping my friend does.
Well, what you say you feel is subjective (as is what I say I feel) but when I personally donate to charity it’s because helping people—even if I don’t directly see the results of my help—makes me happy. If not the ‘warm fuzzy feeling’, at least a feeling comparable to that of helping my friend. That is my subjective feeling.
Nah, you can care about someone’s utility function instrumentally. In fact I think that’s the way most people care about it most of the time, and I have no reliable evidence to suggest otherwise.
I meant ‘caring’ as in direct influence of their utility on your utility (or, at least, the perception of their utility on your utility), conditionally independent of what their utility results in. If you take ‘care’ to simply mean ‘caring about the outcomes’ then yes you’re right. Saying that all people are that way seems quite a strong statement, on par with declaring all humans to be psychopaths.
I don’t see why a psychopath couldn’t have that same thought
They could. But if you select a random psychopath from the whole population, what is the probability of choosing a utilitarian?
To be afraid of non-empathic people, you don’t have to believe that all of them, without an exception, would harm you for their trivial gain. Just that many of them would.
To be afraid of non-empathic people, you don’t have to believe that all of them, without an exception, would harm you for their trivial gain. Just that many of them would.
You would also have to know in what proportion they exist to know that, and you don’t have that information precisely because of such presumptions. You wouldn’t even know what’s normal if displaying certain qualities is useful enough, and detecting whether people really have them isn’t reliable enough.
It’s possible to steelman that hypothetical to the threshold that yeah, killing someone for their money would be tempting. It wouldn’t have much resemblance to real life after that however.
There are several other reasons not to kill someone for their money than empathy, so I’m not sure how your hypothetical illustrates anything relevant.
implies that most people think that mistreating people is a very strong temptation and/or reliably useful
This does seem to be a common assumption—I remember being very confused as a teenager when people said that something I was doing was morally wrong, when the thing didn’t actually benefit me. (My memory is fuzzy, but I’m pretty sure this was family members getting frustrated with the way I acted when depressed.)
Conversely, I used to assume that having empathy implied treating others well—that all people who were especially empathetic also wanted to be nice to people.
The nearest term used in contemporary psychiatry is antisocial personality disorder. AFAIK some forensic psychiatrists use the term psychopath, but the criteria are not clear and it’s not a recognized diagnosis. Forget about the press the term gets.
Lack of empathy certainly isn’t sufficient for either label, and can be caused by other psychiatric conditions.
I don’t understand that connection you made. Care to explain?
You can’t determine someone is a psychopath via a brain scan yet. You can’t even determine someone has Alzheimer’s with only a brain scan, even though it’s pretty well understood which brain regions are damaged in the disease. Psychopathy is a syndrome, and still quite poorly understood. Note also that there would be significant problems with testing if he went to a psychiatrist after knowing his scan results.
I think that neuroscientist is just trying to make money by claiming he’s a psychopath, which of course would be quite a psychopathic thing to do :)
It struck me as relevant to the philosophical question: here’s someone who has had to think hard about “what makes a psychopath or sociopath?” He is in his social actions a reasonably normal and productive citizen, but worries about how much of a dick he is, or could potentially be.
Account by someone who’s highly dependent on prescription hormones to function, some description of the difficulties of finding a doctor who was willing to adjust the hormones properly, a little about the emotional effects of the hormones, and a plea to make hormones reliably available. It sounds like they’re almost as hard to get reliably as pain meds.
Society, and even some aspects of our medical system, are fond of the naturalistic fallacy. I wonder, if we have this much trouble in situations where the treatment simply returns someone to the base rate, how much of a reaction is there going to be in a few decades when the more directly transhuman modifications start coming online?
Returning to the base rate is cheaper and more egalitarian, so what seems like the naturalistic fallacy isn’t necessarily that. In a completely private health care system these things wouldn’t matter, though.
I don’t get it. Why would she need to deal with dozens of people to get meds she clearly needs for a single condition? My heuristics point at her version of the story leaving out something important.
Alternatively you really do have a screwed up health care system in the US.
Why would she need to deal with dozens of people to get meds she clearly needs for a single condition?
Most of these drugs are pretty well-known. Hydrocortisone (or prednisone) is probably the most common and easiest drug, both cheap and commonly prescribed as a general immunosuppressant. The thyroid drugs (probably thyroxine and liothyronine) have a number of on-label uses that could be coherently stretched to cover this particular condition, and are common enough to be in the average pharmacy network. There’ll be some hesitancy to mess with doses heavily—especially after you achieve basic functioning—because of the high risk of adrenal shock, something the author experienced in at least one high-profile incident. In women, a combination estrogen-progesterone therapy is recommended, which is not that dissimilar from the Pill, except opposite in effect.
But that’s not a dozen drugs, and that’s about the full scale of well-documented treatment. There’s not much literature on the use of testosterone in women, for example, and I can think of a half-dozen neurochemicals she might be pioneering. There are endocrinologists that enjoy working at the frontier of drug discovery. There aren’t a huge number that do so whose patients walk on two legs and are known for food preferences other than cheese.
There are also secondary issues. The drug industry has some severe logistics issues, resulting in many drug shortages. One of the most common thyroxine supplements has been on back-order for some time, and is scheduled to stay that way until 2014 after a rather goofy recall. This isn’t unique to hormones (although the levothyroxine example is especially ridiculous), but it matters.
That’s true, I missed the sentence about a dozen drugs. Keep in mind though she might not take all of them exclusively for that particular condition.
I can think of a half-dozen neurochemicals she might be pioneering
I would be interested if you named a few, and whether there’s any evidence of their usefulness.
I can think of a half-dozen neurochemicals she might be pioneering.
If that’s the case the question becomes should she really be allowed to do that. I have no problem with that if the system allows for the patient being completely responsible for taking those drugs, but I don’t think any doctor or insurance company should be expected to take the fall for her. If the drug isn’t well documented and she doesn’t take part in a trial, I think she should finance treatment for any complications herself, and that could easily get more expensive than she can afford.
Her complaints are not the usual ones about the American system. What leads you to believe that it would be different elsewhere? The insurance companies add a gatekeeper, but it’s not the only one.
If she complained that the system simply denied her what she wanted, I’d find it quite plausible that there is more to the story. But instead she says that it is a struggle every month. What could be the other side to that story?
Her complaints are not the usual ones about the American system.
I only have a really vague idea of what those are.
What leads you to believe that it would be different elsewhere?
I’m a doctor from Finland, and don’t think she would have similar problems here, if she really needed those hormones. Does something lead you to believe it wouldn’t be different elsewhere?
But instead she says that it is a struggle every month.
That’s what I find weird too. I can’t imagine why that would happen, at least in my country. I’ve never even heard of the kinds of complaints she has.
What could be the other side to that story?
I don’t know, I’m not an endocrinologist. A wild guess amongst many: doctors really do have good reasons to believe she doesn’t need the amounts of hormones that she thinks she does (placebo), or even that they’re harmful. She could for example have a psychiatric diagnosis or lots of previous pointless visits recorded, which would strengthen this suspicion further. She would have to do doctor shopping to get her prescriptions renewed, which would make it a constant struggle.
I also find it weird that the diagnosis took years when she had symptoms that severe.
I haven’t had many dealings with the medical profession, but her story didn’t seem wildly implausible to me—I have friends who’ve had a hard time, though not that bad, getting the care they need.
Anyone else care to weigh in on whether her story seemed implausible to them, or plausible but at the low quality medical care end of the spectrum?
From my POV, patients quite commonly have misconceptions about what conditions they have, what led to their diagnosis, what treatments they’re getting or why, and especially why they’re denied treatment. She seems reasonably intelligent and educated, so this is a bit less probable.
low quality medical care end of the spectrum?
By the system I would mean the whole system consisting of communication between doctors and their colleagues and pharmacies, continuity of electronic patient histories, insurance, reimbursements, prescription policies, etc. I’m saying that the problem could be in some of those parts, just to be clear.
What I find more interesting is how she had to explore hormone space to find the combination that was her. Reminds me of the experiences of some transgender people I know.
Alexander is a disciple of the equally humorless “rationalist” movement Less Wrong, a sort of Internet update of Robespierre’s good old Cult of Reason, Lenin’s very rational Museums of Atheism, etc, etc. If you want my opinion on this subject, it is that—alas—there is no way of becoming reasonable, other than to be reasonable. Reason is wisdom. There is no formula for wisdom—and of all unwise beliefs, the belief that wisdom can be reduced to a formula, a prayer chant, a mantra, whatever, is the most ridiculous.
Out of interest, does anyone here have a positive unpacking of “wisdom” that makes it a useful concept, as opposed to “getting people to do what you want by sounding like an idealised parental figure”?
Is it simply “having built up a large cache of actually useful responses”?
“Wise” and “smart” are both ways of saying someone knows what to do. The difference is that “wise” means one has a high average outcome across all situations, and “smart” means one does spectacularly well in a few. That is, if you had a graph in which the x axis represented situations and the y axis the outcome, the graph of the wise person would be high overall, and the graph of the smart person would have high peaks.
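To make the average-versus-peaks picture concrete, here is a toy numeric sketch in Python; the outcome scores are invented purely for illustration:

```python
# Hypothetical outcome scores across the same ten situations.
wise_outcomes  = [7, 7, 8, 7, 7, 8, 7, 7, 7, 8]
smart_outcomes = [3, 2, 10, 3, 2, 10, 2, 3, 2, 10]

for label, outcomes in [("wise", wise_outcomes), ("smart", smart_outcomes)]:
    average = sum(outcomes) / len(outcomes)
    print(f"{label}: average = {average:.1f}, peak = {max(outcomes)}")

# The "wise" profile wins on average outcome (7.3 vs 4.7), the "smart"
# profile wins on its best-case peaks (10 vs 8), matching the two graph
# shapes described above.
```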
Wisdom seems to be basically successful pattern matching of mental concepts to situations. You need life experience as the training data for the mental concepts, the varieties of situations, and the outcomes of applying different concepts to different situations, in order to get it running at the sort of intuitive level you need.
I think Moldbug is somewhat on target, LW doesn’t really have much in the way of either explicitly cultivating or effectively identifying the sort of wisdom that lets you produce high-quality original content, beyond the age-old way of hanging around with people who somehow can already do it and hoping that some of it rubs off. So we get people adopting the community opinions and jargon, getting upvotes for being good little redditors, not doing much else, and thinking that they are gaining rationality. We haven’t managed to get the martial art of rationality thing going, where there would be a system in place for getting unambiguous feedback on your actual strength of wisdom.
Prediction markets are one interesting candidate for a mechanism for trying to measure the actual strength of rationality.
In this case he could not be farther off target if he tried. Yvain’s writings are some of the best, most engaging, most charitable and most reasonable anywhere online. This is widely acknowledged even by those who disagree with him.
When I was very young, I had a funny idea about layers of information packing.
“Data” is raw, unfiltered sensory perception (where “senses” include instruments/etc.)
“Information” is data, processed and organized into a particular methodology.
“Knowledge” is information processed, organized and correlated into a particular context.
“Intelligence” is knowledge processed, organized and correlated into a particular praxis.
“Wisdom” is intelligence processed, organized and correlated into a particular goalset.
“Enlightenment” is wisdom processed, organized and correlated into a particular worldview.
I never rigorously defined what the process was to my own satisfaction, but there seemed to my young mind to be an isomorphic ‘level-jumping’ process between each layer that involved processing, organizing and correlating one’s understanding at the previous layer.
In my own head, I mostly unpack “smart” as being able to effectively reason with a given set of data, and “wise” as habitually treating all my observations as data to reason from. Someone with a highly compartmentalized mind can be smart, but not wise. If (A → B) but A is not actually true, someone who is smart but not wise will answer B given A, where someone wise will reject A.
That said, this seems to be an entirely idiosyncratic mapping, and I don’t expect anyone else to use it.
Cult accusations, criticism by way of comparison to things one doesn’t like simply because they bear similar names, use of ill-defined terms as part of that criticism, bizarre analogies to minimally historically connected individuals (Shabtai Tzvi? Seriously? Also does Moldbug realize what that one particularly sounds like given Eliezer’s background?), phrasing things in terms of conflicts of power rather than in terms of what might actually be true, operating under the strong presumption that people who disagree with one are primarily motivated by ulterior motivations rather than their stated ones especially when those ulterior motivations would support one’s narrative.
Starting today, Monday 25 November 2013, some Stoic philosophers are running “Stoic Week”, a week-long mass-participation “experiment” in Stoic philosophy and whether Stoic exercises make you happier.
Recently, a planetary system similar to our own solar system was found. This is one of the first cases where one has rocky planets near the star and large gas giants farther away, like our own solar system. Unlike our system, this system apparently has everything fairly close in, with all the planets closer to the star than the Earth is to the sun.
In case anyone’s wondering, it looks as if the star is comparable in size and luminosity to ours, so this system probably isn’t any more hospitable to Earth-like life than our solar system would be if all the planets were moved much closer in.
You know, I really do feel like I am clinging bitterly to my priors and my meta at this point, as I joked on Twitter recently. I knew this was inevitable should our presence ever be noticed by anyone actually important, like a journalist. What I didn’t know was that it would still hurt.
You shouldn’t be upset by the initial media coverage, and I say this as someone who doesn’t identify with neo-reactionary thought. Attacking new social movements is NOT inevitable. It is a sign of growth and a source of new adherents. Many social movements never pick up enough steam to receive negative coverage, and those movements are ineffective. Lots of people who have never heard of neo-reactionaries will read this article, note that parts of it are pretty obvious bullshit (even the parts that are intended to be most negative; lots of people privately believe that IQ and race are connected even if they are publicly unwilling to say anything of the sort), and follow the links out of interest. There are many very smart people that read TechCrunch, and don’t automatically agree with a journalist just because they read an article. Obviously this is bad for Peter Thiel, who is basically just collateral damage, but it’s most definitely good for neo-reactionaries.
Gandhi’s famous quote (“First they ignore you, then they laugh at you, then they fight you, then you win.”) is accurate as to the stages that a movement needs to pass through, although obviously one can be stopped at any given stage. I think we are already seeing these stages play out in the Men’s Rights movement, which is further along the curve than neo-reaction.
Clinging bitterly to your priors and your meta sounds like a sign you should update, and that’s more important than deleting or not deleting a blog comment.
As for your comment, the first two paragraphs are fine, perhaps even providing helpful clarification. The sarcasm in the second paragraph is probably unhelpful, though; maybe just edit the comment.
I don’t think you should. But maybe this is because I feel the same way (;_;) despite being just someone who endorses HBD and dislikes Progressivism but thinks Moldbug wrong. I like this comment you made elsewhere much better than the one you linked to though:
Progressive takeover of a community is strongly empowered by journalists noticing nonprogressive ideas floating there.
We’ve been noticing this process for a long time now. I now think I was wrong on this in the past. This should be a sign for what you call the “outer right” that we will only inflame the now-inevitable escalation of status warfare, as social justice debates hijack attention away from human rationality toward value and demographic warfare, and people like us are systematically excluded from the intended audience. An explanation of some related costs for those who can’t think of them. I think your and Anissimov’s site More Right makes a nice Schelling point to regroup and continue our exploration of human rationality applied to controversial topics.
HBD = Human BioDiversity, a line of thought which asserts that humans are significantly different genetically. Often called “racism” by people who don’t like it.
To be more clear, HBDers claim not just that humans differ significantly at a genetic level (that’s pretty uncontroversial: I don’t think anyone is going to argue that genetically inherited diseases aren’t a thing, for example). As far as I can tell, the HBDers believe that most or almost all mental traits are genetically determined. Moreover, HBDers seem to generally believe that these genetic traits are distributed in the population in ways that closely match what are normally seen as ethnic and racial groups, and that this explains most of the racial differences in IQ scores, life success, and rates of criminal activity.
The anti-reaction FAQ describes it as “Neoreaction is a political ideology supporting a return to traditional ideas of government and society, especially traditional monarchy and an ethno-nationalist state. It sees itself opposed to modern ideas like democracy, human rights, multiculturalism, and secularism.” As far as I’m aware, neoreactionaries do not object to that description.
I feel this is a stupid question, but I’d rather ask it than not know: Why would anyone want that? I can understand opposing things like democracy, secularism and multiculturalism, but replacing them with a traditional monarchy just doesn’t seem right. And I don’t mean morally; I just don’t see how it could create a working society.
I can fully understand opposing certain ideas, but if you’re against democracy because it doesn’t work, why go to a system of governance that has previously shown not to work?
If you accept the criticism it makes of democracy, you are already basically Neoreactionary. Only about half of them advocate monarchy as what should replace our current order; remember, no one said the journalist did an excellent job reporting about us. While I can’t speak for those who do advocate monarchy, only for myself, here are some of my reasons for finding it well worth investigating and advocating:
Good enough—You need not think it an ideal form of government, but if you look at it and conclude it is better than democracy and nearly anything else tried from time to time so far, why not advocate for it? We know it can be done with humans and can be stable. This is not the case with some of the proposed theoretical forms of government. Social engineering is dangerous; you want fail-safes. If you want to be careful and small-c conservative, it is hard to do better than monarchy: it is as old as civilization, an institution that can create bronze-age empires or transform a feudal society into an industrial one.
Simplicity—Of the other proposed alternative forms of government, it is the one most easily and accurately explained to nearly anyone. Simplicity and emotional resonance are important features with many consequences. For example, when Moldbuggians say society would benefit from formalization, they should aim for a bare-bones OS for this to be feasible. Formalization is the process whereby the gap between the actual and claimed functioning of a social institution is closed as much as possible in order to reduce disinformation. This is considered good because uncertainty results in politics/war. There are also costs to keeping people in positions of responsibility sane and preventing them from accidentally ending up believing disinformation when it is common around them. Not bothering to keep them sane at all seems bad.
Agile experimentation—Social experimentation is nevertheless useful, especially since the same solutions won’t work for all kinds of societies in all situations. Monarchy is a system that can be easily adjusted for either robustness or flexibility as needed. A monarch has simple logistics to set up or allow social experiments. Futarchy, Neocameralism… why risk running a society on such an OS rather than set up a more robust one and then test them within its confines? East India Companies, free cities, and religious orders are common in the history of Western monarchy. Indeed, you can look at constitutional monarchy in modern democratic countries as an experiment that was either judged successful or one that breached containment. Even in the case of breach, the form of monarchy was still preserved and might possibly be revived at a future point in time.
Responsible ideology crafting—Many Neoreactionaries think the relative political stability of the Western world over the past 70 years will not last. Historically, transition from some kind of republic to military dictatorship is common. Rule by the leader of a victorious conquering army has historically shown successful transition to monarchy, as dynasties were basically all founded by such leaders. Even if such a change isn’t likely in the West in itself, the unlikely situation where neoreactionary criticism of democracy would be taken seriously and guide policy is one where the most likely victor of the social instability is not an ideal representation of a Neoreactionary CEO-philosopher but a military dictator. We should try to plan social reform constrained by the logistics of the likeliest outcome of our ideas becoming important; otherwise we are irresponsible. Indeed, this might have been the grand crime of Communist theorists.
Low Hanging Fruit—It has been understudied by modern intellectuals, who furthermore are biased against it. Compare how much modern theoretical work has been done on democracy vs. monarchy. See the number of Wikipedia articles for a quick proxy. This is perhaps practical given the situation we find ourselves in, but also somewhat absurd. For example, as far as I’m aware no one outside reaction has considered in depth the ridiculously obvious idea of King as Schelling Point! Modern game theory, cognitive science and even sociology unleashed on studying monarchy would reveal treasures, even if we eventually decide we don’t want to implement it.
I was trying to say Neoreactionaries basically only strongly agree on these criticisms, not the particular solutions for ameliorating such problems. I hope that is apparent from the paragraph?
Neoreactionaries basically only strongly agree on these criticisms, not the particular solutions
How are you going to distinguish them from conservo-libertarians, then? I would imagine they would also agree with much of those criticisms and will disagree as to the proposed solutions.
They don’t use the particular concepts of Neoreaction, things like the Cathedral, or the idea that Progressivism is the child of Protestant Christianity, or the explanation of why it drifts leftwards. There will be no clear line, as both conservo-libertarians and anarcho-capitalists are big inspirations for the neoreactionary worldview and form a big part of its bedrock. Many reactionaries are observed to be ex-libertarians.
I was under the impression that they also tend to agree about certain social issues such as traditional gender roles (though after posting that comment I found out that Moldbug agrees with progressive views about homophobia); am I wrong?
Neoreaction is basically defined as “these particular criticism of Progressivism & Democracy”! I’m not sure you will find common agreement among neoreactionaries on anything else.
Then you either throw up your hands and go meta with secession/seasteading/etc. or try to find existing systems that neither set of criticisms would apply to… how about Switzerland?
I am curious why Switzerland isn’t more popular among people who want to change the political system. It has direct democracy, decades of success, few problems...
The cynical explanation is that promoting a system someone else invented and tested is not so good for signalling.
I am curious why Switzerland isn’t more popular among people who want to change the political system. It has direct democracy, decades of success, few problems...
The correct question is whether Switzerland’s success is caused by its political system. If not, emulating it won’t help.
We can at least be sure that Switzerland’s success hasn’t been prevented by its political system. This isn’t a proof that the system should be copied, but it’s at least a hint that it should be studied.
Switzerland is pretty small, and it’s not obvious to me that its political system would scale well to larger countries. But then again, it’s not obvious to me that it wouldn’t, either.
My very superficial knowledge says that Switzerland consists of relatively independent regions, which can have different tax rates, and maybe even different laws. These differences allow people to do some lower-scale experiments, and probably allow an individual to feel like a more important part of the whole (one in a few thousands feels better than one in a few millions). I would guess this division to regions is very important.
So a question is, if we wanted to “Switzerland-ize” a larger country, should we aim for the same size (population) or the same number of regions? Greater region size may reduce the effect of an individual feeling important, but greater number of regions could make the interactions among them more complicated. Or maybe the solution would be to have regions and sub-regions, but then it is not obvious (i.e. cannot be copied straightforwardly) what should be the power relationship between the regions and their sub-regions.
It would be safer to try this experiment first in a country of a similar size. Just in case some Illuminati are reading this discussion, I volunteer Slovakia for this experiment, although my countrymen might disagree. Please feel free to ignore them. :D
My very superficial knowledge says that Switzerland consists of relatively independent regions, which can have different tax rates, and maybe even different laws.
Reminds me of some large countries… in North America, I think? :-)
Then again, population-wise it’s bigger than reactionary poster children such as Singapore or Monaco and comparable to progressivist poster children such as Sweden or Denmark.
I want to emphasize again that monarchy only recently gained popularity among neoreactionaries; it’s possible the majority of them still dream of Moldbug’s SovCorps. Anarcho-Papist, for example, basically believes anarcho-capitalism is best but thinks the Neoreactionary analysis of why society is so leftist is correct.
The popularity of aristocratic and monarchist stories in popular culture—Star Wars, LOTR, The Tudors, Game of Thrones, possibly Reign if its ratings improve, etc.—says something about the human mind’s “comfort” with this kind of social organization. David Brin and similar nervous apologists for democracy have that working against them.
I can fully understand opposing certain ideas, but if you’re against democracy because it doesn’t work, why go to a system of governance that has previously shown not to work?
The obvious question here is, why do you think monarchy has been “shown not to work”? Is it because monarchies have had a tendency to turn into democracies? Or perhaps because historical monarchies didn’t have the same level of technology that modern liberal democracies enjoy?
That question is kinda obvious. Thanks for pointing it out.
From what I remember from my history classes, monarchies worked pretty okay with an enlightened autocrat who made benefiting the state and the populace his or her prime goal. But the problem was that such rulers didn’t stay in power, and they had no real way of making absolutely sure their children had the same values. All it takes to mess things up is one oldest son (or daughter, if you do away with the Salic law) who cares more about their own life than those of the population.
So I don’t think technology level is a decisive factor. It probably will improve things for the monarchy, since famines are a good way to start a revolution, but giving absolute power to people without a good fail-safe for when you’ve got a bad ruler seems like a good way to rot a system from the inside.
I was in a Chinese university around George W. Bush’s second election and afterwards, which didn’t make it easy to convince Chinese students that democracy was a particularly good system for picking competent leaders (Chinese leaders are often graduates of prestigious universities like Tsinghua (where I was), which is more like MIT than like Yale, and they are generally very serious and competent, though not particularly telegenic). On the other hand, the Chinese system gets you people like Mao.
I don’t think Mao could exactly be said to be a product of the Chinese system, seeing as unless you construe the “Chinese system” to include revolutions, it necessarily postdates him.
I’m not necessarily saying that democracy is the best thing ever. I just have issues jumping from “democracies aren’t really as good as you’re supposed to believe” to “and therefore a monarchy is better.”
I feel I should point out the Chinese system was not what got Mao into power. Instituting the Chinese system is what got him into power. And this system saw massive reform since then.
Bullets 5 and 6 of this MoreRight article point out some reactionary ideas to assuage your concerns. Like Mr. Anissimov notes, it is necessary not only to consider the harm such a failure mode might cause, but also to compare it to failure modes that are likely to arise in demotist systems. Reactionary thought also includes the idea that good systems of government align their incentives such that the well-being of the ruler coincides with that of the people, so a perfectly selfish son should not be nearly as much of a concern as a stupid or evil one.
Picture an alternative Earth Prime where monarchies dominated the political landscape and democracies were seen as inconsequential political curiosities. In this Earth Prime, can you not imagine that textbooks and teachers might instead point out equally plausible-sounding problems with democracy, such as the fact that politicians face selection pressures to cut off their time horizons around the time of their next election? Can you not imagine pointing to small democracies in their world with failures analogous to failures of democracies in our world, and declaring “Q.E.D.”? How sure are you that what you are taught is a complete and unbiased analysis of political history, carried out by sufficiently smart and rational people that massive errors of interpretation are unlikely, and transmitted to you with high fidelity?
How sure are you that what you are taught is a complete and unbiased analysis of political history, carried out by sufficiently smart and rational people that massive errors of interpretation are unlikely, and transmitted to you with high fidelity?
I don’t think you have to be (certainly I am not) not to put much credence in Reaction. From the premise that political history is conventionally taught in a biased and flawed manner, it does not follow that Reaction is unbiased or correct.
The tendency to see society as being in a constant state of decline, descending from some golden age, is positively ancient, and seems to be capable of arising even in cases where there is no real golden age to look back on, unless society really started going downhill with the invention of writing. There is no shortage of compelling biases to motivate individuals to adopt a Reactionary viewpoint, so for someone attempting to judge how likely the narrative is to be correct, they need to look, not for whether there are arguments for Reaction at all, but whether those arguments are significantly stronger than they would have predicted given a knowledge of how well people tend to support other ideologies outside the mainstream.
I don’t think you have to be (certainly I am not) not to put much credence in Reaction. From the premise that political history is conventionally taught in a biased and flawed manner, it does not follow that Reaction is unbiased or correct.
Of course not; even if you reject the current conventional narrative, it still takes a lot of evidence to pinpoint Reaction as a plausible alternative (never mind a substantially correct one). But Mathias was basically saying that the models and case studies of monarchy he studied in his history classes provided him with such a high prior probability that monarchy “doesn’t work” that he couldn’t imagine why anybody could possibly be a monarchist in this day and age. I was arguing that the evidence he received therein might not have been quite as strong as he felt it to be.
Or perhaps because historical monarchies didn’t have the same level of technology that modern liberal democracies enjoy?
At the given time, they were replaced by democracies with the same technology level they had.
The argument could be constructed that for different levels of technology, a different form of government is optimal. Which sounds plausible. For a very low technology level, living in a tribe was the best way of life. For a higher level, it was a theocracy or monarchy. For a yet higher level, it was a democracy (and this is why the old monarchies are gone). And for an even higher level (today in the first world), it is monarchy again.
It’s a bit suspicious that the monarchy is the optimal form of government twice, but not impossible. (Although it is better to have opinions because most evidence points towards them, not merely because they are not completely impossible.)
Or perhaps because historical monarchies didn’t have the same level of technology that modern liberal democracies enjoy?
At the given time, they were replaced by democracies with the same technology level they had.
That response is nonsense, an unfair reading. Jaime already offered your hypothesis immediately preceding:
Is it because monarchies have had a tendency to turn into democracies?
He explicitly says that means something completely different.
I imagine that he means, quite correctly, that many comparisons between democracies and monarchies fail to compare examples at the same technology level.
As to the other point, I doubt Jaime thinks that monarchies turning into democracies is a very good argument in favor of democracies, just that it is a common implicit argument. I doubt that there are many people who think that monarchy is a good form of government at two technological levels, separated by democracy. Generally people who condemn democracy think that it was a mistake, perhaps historically contingent, or perhaps a natural tendency of technology, but one to be fought. Some reactionaries hold that this is a good time to pursue non-democracies, but usually because democracy is finally self-destructing, not because technological pressures have reversed course.
But monarchies turning into democracies is evidence against the stability of monarchies, and some reactionaries do implicitly make the argument that technology favors monarchy in two different periods.
why go to a system of governance that has previously shown not to work?
Because you are so incredibly smart that today you will get everything right, and those old mistakes done by lesser minds are completely irrelevant...?
Maybe it’s not about people really wanting to live under some majesty’s rule, but about an irresistible opportunity to say that you are smarter than everyone else, and you have already found a solution for all humanity’s problems.
(This was originally my observation of Communists of the smarter type, but it seems to apply to Neoreactionaries as well.)
Even before reading it, I already agree that democracy does not work the way people originally thought it would, and some pretend it works even today. (People voting to get money from their neighbors’ pockets. Idiots who know nothing and want to learn nothing, but their vote is just as important as Einstein’s. Media ownership being the critical factor in elections.)
That just doesn’t give me enough confidence that my solution would be better. Let’s say it would avoid some specific problems of democracy successfully. How about new problems? (Or merely repetition of the old ones, enhanced by the modern technology.)
Einstein was a physicist. He probably had more sense about politics than a random inattentive person who votes on the basis of emotion, but I’m going to hope that people who actually know something about politics get influence by writing and/or politicking. Their influence isn’t limited to their vote.
To quote myself on what I consider is plausibly better than democracy:
Futarchy for starters. Neocameralism proposed by Mencius Moldbug might work better but is risky. City-state oligarchies. Anarcho-Capitalism if you can get it. A Republic with limited franchise if you can keep it. A properly set up monarchy. Even democratic technocracy, where the democratic element would have about as much role in governance as the monarchy does in the Constitutional Monarchy of the United Kingdom. Arguably we are nearly there anyway.
Neocameralism in particular is something that is possibly still more popular among Neoreactionaries than monarchy. Here I briefly explain it:
Neocameralism, proposed by Moldbug, is basically to have the state be guided by the profit motive and to have such overwhelming military force (with crypto-lock technology to enforce it) that it has no reason to brainwash its citizens, since they don’t have the military force to matter politically. They can’t seize the government’s/company’s assets. The profit motive together with the corporate structure keeps most such states from being hijacked by their CEO, as well as keeping most of them nice to their customers (the citizens). You can make sure it will be nice by giving stock options to specialized, efficient charities. Basically, divide the state between the rent-extracting part and the goodness-generating part people expect, min-max both, pair them up in a single adventuring party, and enjoy your munchkinized society. Obviously it kind of sucks if you discover things people really, really like spending money on but that hurt them, but hey, democracy would collapse at that too.
Well, the neoreactionaries claim that strong monarchies will be more stable, and less subject to needing to satisfy the fickle whims of the population. There is some validity to at least part of the argument: long-term projects may do better in dictatorships. Look for example at the US space program: there’s an argument that part of why it has stalled is that each President, desiring to have a long-lasting legacy, makes major changes to the program’s long-term goals, so every few years a lot of work in progress is scrapped. Certainly that’s happened with the last three Presidents. And the only President whose project really stayed beyond his office was JFK, who had the convenience of being a martyr and having a VP who then cared a lot about the space program’s goals.
However, the more general notion that monarchies are more stable as a whole is empirically false, as discussed in the anti-reaction FAQ.
What I suspect may be happening here is a general love for what is seen as old, from when things were better. Neoreaction may have as its core motivation a combination of cynicism for the modern with romanticism about the past.
If you do read any of the pro-reaction stuff linked to by K (or the steelman of reaction by Yvain) I suggest you then read Yvain’s anti-reaction FAQ which provides a large amount of actual data.
Thank you. I’ll read the FAQ, it seems exhaustive and informative.
And as I hope I made clear, I can certainly understand the notion that “democracy isn’t awesome”. But I don’t get the jump from there to “a monarchy will be better.”
Yvain’s anti-reaction FAQ shows nothing of the sort. It cherry-picks a few examples. To compare the stability of democracies and monarchies, a much broader historical comparison is needed. I’m working on one now, but people should really read their history. Few of those who confidently claim monarchies are unstable have more than a smidgen of serious reading on Renaissance Europe under their belts.
Considering that your response relies heavily on deciding who is or isn’t “demotist”, it might help to address Yvain’s criticism that the idea isn’t a well-defined one. The issue of monarchs who claim to speak for the people is a serious one. Simply labeling dictators one doesn’t like as demotist doesn’t really do much. Similarly, your response also apparently ignores Yvain’s discussion of the British monarchy.
Napoleon was a populist Revolutionary leader. That should be well-understood.
I’m not convinced that this is a meaningful category. It is similarly connected to how you blame assassinations and other issues on the populist revolutions: if monarchies historically led to these repeatedly, then there’s a definite problem in saying that that’s the fault of demotist tendencies, when the same things have not by and large happened in democracies once they’ve been around for a few years.
Also, while Napoleon styled himself as a populist revolutionary leader, he came to power from the coup of 18 Brumaire, through military strength, not reliance on the common people. In fact, many historians see that event as the end of the French Revolution.
While I understand that responding to everything Yvain has to say is difficult, I’d rather read a complete and persuasive response three months from now than an unpersuasive one right now. By all means, feel free to take your time if you need it.
All of these have issues. I like Nick Land’s one best; Moldbug is probably easier to read if you are used to the writing style here; Scott is the best writer of the three, but his is deficient and makes subtle mistakes since he isn’t reactionary.
My own summary of some points that are often made would be:
If you build a society based on consent, don’t be surprised if consent factories come to dominate your society. What reactionaries call the Cathedral is the machinery that naturally arises when the best way to power is hacking the opinions of masses of people so they consent to whatever you have in store for them. We claim the beliefs this machine produces have no consistent relation to reality; it is just stuck in a feedback loop of giving itself more and more power over society. Power in society thus truly lies with the civil service, academia and journalists, not elected officials, who have very little to do with actual governing. This can be shown by interesting examples like the EU repeating referendums until they achieve the desired results, or Belgium’s 589 days without an elected government. Their non-government managed to have little difficulty doing things with important political implications, like nationalizing a major bank.
Moral Progress hasn’t happened. Moral change has, we rationalize the latter as progress. Whig history is bunk.
The modern world allows only a very small window of permitted policy experimentation. Things like seasteading and charter cities are ideas we like but think will not be allowed to blossom if they breach the narrow window of experimentation allowed among current Western nations.
Democracy is overvalued, monarchy is undervalued. This translates to some advocating monarchy and others dreaming up new systems of government that take this into account.
McCarthy was basically right about the extent of Communist influence in the United States of America after the 1940s. We have weird things like the Harvard Crimson endorsing the Khmer Rouge in the 70s, or FDR’s main negotiator at Yalta being a Soviet spy, cropping up constantly when we examine the strange and alien 20th century. McCarthy used some ethically questionable methods against Communists (and yes, most of his targets were actual Communists), but if you check them out in detail you will see they are no more extreme or questionable than the ones we have now routinely used against Fascists for nearly 80 years. Why do we live in a Brown Scare society while the short second Red Scare is treated by many as one of the gravest threats against liberal democracy ever? Why were Western intellectuals consistently deluded about Communism from at least the 1920s to as late as the 1980s, if they are as trustworthy as they claim?
Psychological differences exist between ethnic groups and between the sexes, and these should have implications for issues like women in combat, affirmative action, and immigration.
The horror show of the aftermath of decolonization in some Third World countries was a preventable disaster on the scale of Communist atrocities.
The first three are meta-level arguments that contribute to the last four, which are object-level assessments you can make without resorting to the meta arguments.
The claim that the morality of a society doesn’t steadily, generally, and inexorably increase over time is not the same as the claim that there will be no examples of things that can be reasonably explained as increases in societal morality. If morality is an aggregate of bounded random walks, you’d still expect some of those walks to go up.
To return to the case at hand: the decline of lynching may be an improvement in one area, but you have to weigh it against the explosions in the imprisonment and illegitimacy rates, the total societal collapse of a demographic that makes up over a tenth of the population, drug abuse, knockout games, and so on.
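To make the bounded-random-walk point above concrete, here is a minimal Python sketch; the walk parameters and counts are arbitrary illustrations, not a model of any actual social indicator:

```python
import random

random.seed(0)

def bounded_walk(steps=200, lo=-10.0, hi=10.0):
    """Run one zero-drift random walk clipped to [lo, hi]; return where it ends."""
    x = 0.0
    for _ in range(steps):
        x = min(hi, max(lo, x + random.gauss(0, 1)))
    return x

walks = [bounded_walk() for _ in range(100)]
up = sum(1 for w in walks if w > 0)
print(f"{up} of 100 walks ended above their start; mean endpoint: {sum(walks) / len(walks):+.3f}")
# Even with zero aggregate drift, roughly half the walks still end higher
# than they started, so a single rising component is weak evidence of a
# general upward trend in the aggregate.
```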
To return to the case at hand: the decline of lynching may be an improvement in one area, but you have to weigh it against the explosions in the imprisonment and illegitimacy rates, the total societal collapse of a demographic that makes up over a tenth of the population, drug abuse, knockout games, and so on.
Do you think there’s a causal connection between the decline of lynching and the various ills you’ve listed?
How is causality relevant? The absence of continuous general increase is enough to falsify the Whig-history hypothesis, given that the Whig-history hypothesis is nothing more than the hypothesis of continuous general increase—unless we add to the hypothesis the possibility of ‘counterrevolutionary’ periods where immoral, anti-Whig groups take power and immorality increases, but expressing concern over things like illegitimacy rates, knockout games, and inner-city dysfunction is an outgroup marker for Whigs.
Demonstrating causality would be doing more work than is necessary. To argue against the hypothesis that the values of A, B, C, … are all increasing, you don’t need to show that an increase in the value of A leads to decreases in any of B, C, …; you just need to demonstrate that the value of at least one of A, B, C, … is not increasing.
(To avert the negative connotations the above paragraph would likely otherwise have: no, I don’t think the decline of lynching caused those various ills.)
To return to the case at hand: the decline of lynching (A) may be an improvement in one area, but you have to weigh it against the explosions in the imprisonment (B) and illegitimacy rates (C), the total societal collapse of a demographic that makes up over a tenth of the population (D), drug abuse (E), knockout games (???), and so on.
(parentheticals added).
You were originally arguing that some weighted sum of A, B, C… was increasing. NancyLebovitz was pointing out that A has clearly decreased, and so for the sum to increase on average, there has to be a correlation between A decreasing and B, C, … increasing. Then she asked if you thought this correlation was causal.
In response, you punted and changed the argument to:
The absence of continuous general increase is enough to falsify the Whig-history hypothesis, given that the Whig-history hypothesis is nothing more than the hypothesis of continuous general increase
which was a really nice tautological argument.
So while showing causality is “more work than is necessary” for disproving the straw-Whiggery of your previous comment, it doesn’t mean anything for the point NancyLebovitz was raising.
I think people not being assaulted and killed by an angry crowd is good. Vigilantism is a sign of a deficient justice system and insufficient pacification of the population, thus poor governance. I’m happy about the reduction in lynching, but unhappy that other indicators of depacification and deficient justice systems seem to have grown worse in Western society.
As a side note, mob violence is still a disturbingly common phenomenon from Nigeria to Madagascar, not to mention Southern Asia and some Latin American countries. I’m also, sadly, quite unconvinced that no lynchings occur in Western states.
If you build a society based on consent, don’t be surprised if consent factories come to dominate your society.
That isn’t an argument that the right is right, since the left has its own version of it; see Chomsky’s Manufacturing Consent.
What’s more, manufactured consent existed in societies that didn’t run on consent, in the form of actual sermons preached in actual churches and actual cathedrals.
My own attempt at a limited view of moral progress has the following features:
Economic growth, largely driven by secular trends in technology, has resulted in greater surpluses that may be directed towards non-survival goals (cf. Yvain’s “Thrive/Survive” theorising), some of which form the prerequisites of higher forms of civilisation, and some of which are effectively moral window-dressing.
As per the Cathedral hypothesis, with officially sanctioned knowledge only being related to reality through the likely perverse incentives of the consent factory, this surplus has also been directed towards orthogonal or outright maladaptive goals (in cyclical views of history, Decadence itself).
We no longer have to rationalise the privations of older, poorer societies. This is the sense in which linear moral progress is the most genuine (cf. CEV).
The interaction between the dynamics of holier-than-thou moralising and the anticipatory experience of no longer having to rationalise poverty is complicated. Examination of history reveals the drive for levelling and equalisation to be omnipresent, if not consistently exploitable.
No. Well, maybe the third paragraph, except that it’s part of history now and for that reason should be left alone. But otherwise, both your distancing of MoreRight from LessWrong and Eliezer’s distancing of LessWrong from the reactosphere are appropriate and relevant statements of true things.
Maybe I can. It seems Eliezer was hurriedly trying to make the point that he’s not affiliated with neoreactionaries, out of fear of the name of LessWrong being besmirched.
It’s definitely true, I think, that Eliezer is not a neoreactionary and that LessWrong is not a neoreactionary place. Perhaps the source of confusion is that the discussions we have on this website are highly unusual compared to the internet at large and would be extremely unfamiliar and confusing to people with a more politically-oriented, mind-killed mindset.
For example, I could see how someone could read a comment like “What is the utility of killing ten sad people vs one happy person” (that perhaps has a lot of upvotes) - which is a perfectly valid and serious question when talking about FAI—and erroneously interpret that as this community supporting, say, eugenics. Even though we both know that the person who asked that question on this site probably didn’t even have eugenics cross their mind.
(I’m just giving this as an example. You could also point to comments about democracy, intersexual relationships, human psychology, etc.)
The problem is that the inferential distance between these sorts of discussions and political discussions is just too large.
Instead of just being reactionary and saying “LessWrong doesn’t support blabla”, it would have been better if Eliezer had just recommended that the author of that post read the rationality materials on this site.
LessWrong is about the only public forum outside their own blog network that gives neoreaction any airtime at all. It’s certainly the only place I’ve tripped over them.
On the other hand, I at least found the conversation about neoreaction on LW to be vague and confusing and had basically no idea of what the movement was about until I read Yvain’s pieces.
it would have been better if Eliezer had just recommended that the author of that post read the rationality materials on this site.
I find it unlikely that the author would do that, or have the right mindset even if he did. So do you mean this would have been more optimal signaling somehow?
Perhaps signaling, and also to get people who are reading the article and comment section to read more about LessWrong instead of coming to possibly the wrong conclusion.
The best move for Eliezer to disassociate LessWrong from reactionaries would be to not mention them at all. Do you see anyone defending the honor of Hacker News in the comment section? Think about what your first instinct is when you hear someone from some organization you know nothing about explaining that they are not actually right-wing, or Communist, or, even better, racist.
Eliezer’s comment hurt my feelings and I’m not sure why it was really necessary. Responding to something just reinforces the original idea. If rationalists want to reject the Enlightenment, we should have every right to do so, without Eliezer proclaiming that it’s not canon for this community.
If I had still been working for MIRI now, would I be fired because of my political beliefs? That’s the question bothering me. Are brilliant mathematicians going to be excluded from MIRI for having reactionary views?
Part of the comment is basically like, “Scott Alexander good boy. We have paid him recently. Anissimov bad. Bad Anissimov no work for us no more.”
Eliezer’s comment hurt my feelings and I’m not sure why it was really necessary. Responding to something just reinforces the original idea. If rationalists want to reject the Enlightenment, we should have every right to do so, without Eliezer proclaiming that it’s not canon for this community.
You claim a right not to have your feelings hurt that overrules Eliezer’s right to speak on the matter? That concept of offense-based rights and freedom to say only nice things is one that I am more used to seeing neoreactionaries find in their hated enemies, the progressives. Are you sure you know where you are actually standing?
Eliezer has made a true statement: that neoreaction is not canon for LessWrong or MIRI, in response to an article strongly suggesting the opposite.
Elsethread you write:
The fact that Eliezer felt the need to respond explicitly to these two points with an official-sounding disavowal shows hypersensitivity
So Eliezer shouldn’t say anything, because:
He’s hurting your feelings.
He’s being hypersensitive. Thank you for making this so clear.
Apparently the supposed Streisand effect applies to him responding to Klint but not to you responding to him. How does that one go?
“Responding to something just reinforces the original idea” touts timidity as a virtue—again, not a sentiment I would ever expect to see penned by any of the neoreactionaries I have read. These are the words of a sheep in wolf’s clothing.
And btw, it looks to me like Eliezer’s wasn’t an official-sounding disavowal, it was an official disavowal.
Your response to Eliezer, both here and in the other thread, comes across as a completely unjustified refusal to take his comment at face-value: Eliezer explaining that he concluded your views were not worth spending time on for quite rational reasons, and is saying so because he doesn’t want people thinking he or the majority of the community he leads hold views which they don’t in fact hold.
This seems to be part of a pattern with you: you refuse to accept that people (especially smart people) really disagree with you, and aren’t just lying about their views for fear of reputational consequences. It’s reminiscent of creationists who insist there’s a big conspiracy among scientists to suppress their revolutionary ideas. And it contributes to me being glad that you are no longer working for MIRI, for much the same reasons that I am glad MIRI does not employ any outspoken creationists.
I find this comment a bit mean (and meaner than most of what I saw in this thread or the linked one, tho I haven’t read that one in much detail).
Maybe it’s because other people feel more strongly about this topic than I do; to me “democracy vs. monarchy” is both a confused and fuzzy question and an irrelevant one. Maybe with a lot of effort one can clarify the question and with even more effort, come up with an answer, but then it has no practical consequences.
Not mean-spirited. Just honest. If this were a private conversation, I’d keep my thoughts to myself and leave in search of more rational company, but when someone starts publicly saying things like...
“Eliezer [is] proclaiming that it’s not canon for this community.”
“The comment is basically like, ‘Scott Alexander good boy. We have paid him recently. Anissimov bad. Bad Anissimov no work for us no more.’”
Accusing Eliezer of dismissing an idea out of hand due to fear of public unpopularity.
(all of which are grossly unfair readings of Eliezer’s comment)
Not that much more unfair than proclaiming something thoroughly refuted and uninteresting based on a single post rebutting the least interesting claims of only two authors, especially given that what appears to have gotten picked up as the central point of the post (NK/SK) is wrong on many different levels.
Hm, I didn’t feel that Eliezer was being particularly dismissive (and am somewhat surprised by the level of the reactions in this thread here). The original post sort-of insinuated that MIRI was linked to neoreaction, so Eliezer correctly pointed out that MIRI was even more closely linked to criticism of Neoreaction, which seems like what anybody would do if he found himself associated with an ideology he disagreed with—regardless of the public relations fallout of that ideology.
Reminder that the article just said neoreactionaries “crop up” at Less Wrong. Then the author referred to a “conspiracy,” which he admits is just a joke and explicitly says he doesn’t actually believe in it. The fact that Eliezer felt the need to respond explicitly to these two points with an official-sounding disavowal shows hypersensitivity, just like he displayed hypersensitivity in his tone when he reacted to the “Why is Moldbug so popular on Less Wrong?” thread. The tone is one of “Get it off me! Get it off me! Aiyeee!” If he actually wanted to achieve the “get it off me” goal, indifference would be a more effective response.
Does no official response from Hacker News, which also received the damning accusation that neoreactionaries “crop up” there, imply consent and agreement from Y Combinator?
There’s a difference between “neoreactionary” and “expresses skepticism against Progressive Orthodoxy”. Paul Graham might be guilty of the latter, but there’s certainly little evidence to judge him guilty of the former.
Paul Graham might be guilty of the latter, but there’s certainly little evidence to judge him guilty of the former.
I wasn’t aware we were a courtroom and we were holding our opinions to a level of ‘beyond a reasonable doubt’. I was pointing out that silence is often consent & agreement (which it certainly is), that PG has expressed quite a few opinions a neoreactionary might also hold (consistent with holding neoreactionary views, albeit weak evidence), and he has been silent on the article (weak evidence, to be sure, but again, consistent).
that PG has expressed quite a few opinions a neoreactionary might also hold
IAWYC but the relevant standard is “which a neoreactionary is more likely to hold than a non-reactionary”. I’d guess both Ozy Frantz and Eugine_Nier would agree about the colour of the sky, but...
You should know perfectly well that as long as MIRI needs to coexist and cooperate with the Cathedral (as colleges are the main source of mathematicians) they can’t afford to be thought of as right wing. Take comfort at least in knowing that whatever Eliezer says publicly is not very strong evidence of any actual feelings he may or may not have about you.
I can’t figure out whether the critics believe the Cathedral is right-wing paranoia or a real thing.
MIRI is seen as apolitical. I doubt an offhand mention in a TechCrunch hatchet job is going to change that, but a firm public disavowal might, per the Streisand effect.
From reading HPMOR and some of the sequences (I’m very slowly working my way through them) I get the impression that Eliezer is very pro-enlightenment. I can’t imagine that he’d often explicitly claim to be pro-enlightenment if he weren’t, rather than simply avoiding the whole issue.
Being pro-enlightenment from the perspective of a science fanboy and polyamorous atheist is different from being pro-enlightenment as a direct counterargument to reactionary thought. Certainly before I read NR stuff I never thought a reasonable person could claim the enlightenment was a bad thing.
Special case. This site is based around his work, so he has every right to decide what it is officially linked to, but the tone of his remarks seemed to go much further than merely disavowing an official connection. Eliezer also states that “‘More Right’ is not any kind of acknowledged offspring of Less Wrong nor is it so much as linked to by the Less Wrong site”, but More Right is indeed linked to in the blogs section of the Wiki, last time I checked. Also, More Right was founded by LessWrong rationalists applying rationality to reactionary ideas. More Right is indeed an indirect offspring of the LessWrong community, whether community leaders like it or not.
But you’re not a brilliant mathematician – you shouldn’t (even rhetorically) evaluate the consequences of your political actions as they would relate to a hypothetical highly-atypical person. Of course, a genius (being of immense value) has lots of wiggle room. But you’re not one.
If you still worked at MIRI, you would have negative value. That is, the risk of someone using your writings to tar and feather MIRI would be higher than the expected value of employing you. It’s likely you would be fired, as it would be a rational move. I have no idea how good you were at whatever it was you did for MIRI, but it’s likely there are plenty of candidates of equal abilities who are not publishing blogs that pattern-match with fascist literature.
As being thought of in a political light (especially a political light that the vast majority of prospective contributors and donors find distasteful) would certainly harm MIRI, how could you possibly be offended by something so predictable?
Marcus Hutter just had a little article about AIXI published on the Australian website “The Conversation”. Not much there that will be new to LW readers, though. Includes a link to his presentation at the 2012 Singularity Summit.
Discussion on Hacker News. (As happens far too often on HN, the link there is to the blogspam site phys.org rather than to the original source.)
It’s actually somewhat less of a problem on Hacker News than on many sites, because the editors there will proactively change the source to the more canonical one. For example, this particular post now points to the original article.
I do sometimes get mildly annoyed at people linking to a generic news site about some research (nytimes, etc.) instead of digging up a source as close to the original as possible.
It may not be “blogspam” proper, but it’s research or tech being summarized by journalists, which tends to be less accurate than the original source.
I just realized it’s possible to explain people picking the dust specks in the torture vs. dust specks question using only scope insensitivity and no other mistakes. I’m sure that’s not original, but I bet this is what’s going on in the head of a normal person when they pick the specks.
The dust speck “dilemma”, like a lot of the other exercises that get the mathematically wrong answer from most people, triggers a very valuable heuristic: the “you are trying to con me into doing evil, so fuck off” heuristic. Consider the problem as you would if it were presented to you in real life.
The negative utility of the “Torture” choice is nigh-100% certain. It is in your physical presence, you can verify it, and “one person gets tortured” is the kind of event that happens in real life with depressing frequency. The “Billions of people get exposed to very minor annoyance” choice? How is that causal chain supposed to work, anyway? So that choice gets assigned a very high probability of being a lie.
And it is the kind of lie people encounter very frequently. False hypotheticals in which large numbers of people suffer if you do not take a certain action are a common lever for cons. From a certain perspective, this is what religion is: attempts to hack people’s utility functions by inserting such absurdly large numbers into the equations that if you assign any probability at all to their being true, they become dominant.
So claims that look like this class of attack routinely get assigned a probability of zero unless they have very strong evidence backing them up because that is the only way to defend against this kind of mental malware.
This is essentially an outside-the-argument argument. If we really had a choice between 50 years of torture and 3^^^3 dust specks, the rational choice would be the 50 years of torture. But the probability of this description of the situation being true is extremely low.
If you, as a human, in a real-life situation believe that you are choosing between 50 years of torture and 3^^^3 dust specks, almost certainly you are confused or insane. There will not be 3^^^3 dust specks, regardless of whichever clever argument has convinced you otherwise; you are choosing between an imaginary number of dust specks and a probably real torture, in which case you should be against the torture.
The only situations where you can find this dilemma in real life are the “Pascal’s mugging” scenarios. Imagine that you want to use glasses to protect your eyes, and your crazy neighbor tells you: “I read in my horoscope today that if you use those glasses, a devil will torture you for 50 years.” You estimate the probability of this to be very low, so you use the glasses despite the warning. But as we know, the probability is never literally zero: you chose to avoid some dust specks in exchange for maybe a 1/3^^^3 chance of being tortured for 50 years. And this is a choice reasonable people make all the time.
Summary: In real life it is unlikely to encounter extremely large numbers, so we should be suspicious about them. But it is not unlikely to encounter extremely small probabilities. Mathematically, this is equivalent, but our intuitions say otherwise.
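For what it’s worth, the structure of that argument is easy to make explicit. A minimal Python sketch, where every number is a made-up placeholder (3^^^3 itself is unrepresentable, so a large stand-in is used):

```python
# All numbers below are made-up placeholders, purely for illustration.
speck = 1e-10     # assumed disutility of one dust speck
torture = 1e9     # assumed disutility of 50 years of torture
n = 10.0 ** 30    # stand-in for 3^^^3, which no float can hold

# Taking the dilemma's description at face value, the specks dominate:
print(n * speck > torture)           # True: choose the torture

# Discounting for "this giant number is almost certainly a con":
p_real = 1e-25                       # assumed prior that the setup is real
print(p_real * n * speck > torture)  # False: refuse, choose the specks
```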
It probably goes like this: “Well, 3^^^3 is a big number; something like 100. Would I torture a person to prevent 100 people having a dust speck in their eyes? How about 200, or 1000? No, this is obviously madness.”
Yes. This. Whenever I talk with anyone about the Torture vs. Dust Specks problem, I constantly see them falling into this trap. See, for instance, this discussion post from a few months back, and my reply to it.
This happens again and again, and by this point I am pretty sure that the whole problem boils down to just this.
I know that MIRI doesn’t aim to actually build a FAI and they mainly try to provide material for those that seriously try to do it in the future. At least this is my understanding, correct me if I’m wrong.
But has the work MIRI has done so far brought us closer to building a working FAI or any kind of AGI for that matter? Even a tiny bit?
From what I’ve seen in the last year, MIRI has sort-of backpedaled on the “actually building an AGI/FAI” goal, and pushed forward in their public declarations the “milder” goal of ensuring a positive impact of AGI when it finally gets created by someone.
Are you sure? The linked comment was posted in August of this year. I have seen them push the second goal more strongly recently, but I don’t think the first goal has been rescinded.
Sparked by the recent thread(s) on the Brain Preservation Foundation and by my grandfather starting to undergo radiation and chemo for some form of cancer. While timing isn’t critical yet, I’m tentatively trying to convince my mother (who has an active hand in her father’s treatment) to consider preservation as an option.
What I’m looking for is financial and logistical information of how one goes about arranging this starting from a non-US country, so if anyone can point me at it I’d much appreciate it.
They can fly patients in from non-US countries, but they recommend that patients be in the USA if they feel they have a significant risk of death because it greatly decreases the risk of brain damage during transport.
Also, most Alcor members finance their preservation through life insurance. This might complicate things in your situation.
The statistical and econometrics literature on causality is more focused on “effects of causes” than on “causes of effects.” That is, in the standard approach it is natural to study the effect of a treatment, but it is not in general possible to define the causes of any particular outcome. This has led some researchers to dismiss the search for causes as “cocktail party chatter” that is outside the realm of science. We argue here that the search for causes can be understood within traditional statistical frameworks as a part of model checking and hypothesis generation. We argue that it can make sense to ask questions about the causes of effects, but the answers to these questions will be in terms of effects of causes.
I recently started using Habit RPG, which is a sort of gamified todolist where you get gold and XP for doing your tasks and not doing what you disapprove of.
Previously I had been mostly using Wunderlist (I also tried Remember The Milk, but found the features too limited), and so far Habit RPG looks better than Wunderlist on some aspects (more fine-grained control of the kind of tasks you put in it, regular vs. one-off vs. habits), and of course has an extra fun aspect.
Anybody else been trying it? (I saw it mentioned a few times on LW) Anybody else want to try?
I added so many dailies that I basically found the overhead of updating HRPG ate into the productive energy needed to complete the dailies, and ended up in a death cycle. I then decided to use it explicitly only for tasks at work, and have been keeping up with it ever since, but I do miss the sweet spot of personal life productivity that I hit before that crash.
I use it. I find that it doesn’t motivate me too much to do things unless they’re dailies or I treat them as such; for example, I’ve had “compliment someone” as a habit for months, but I don’t think it’s actually made me compliment people more except maybe a small burst when I added it. But I find it more rewarding (not necessarily more effective) than beeminder for dailies; I don’t care about the yellow brick road, but I enjoy getting XP and gold and pets.
I wouldn’t say either is better. HRPG doesn’t easily let you say “I want to do this X times a week” for values of X other than 7, unless you want to specify the days in advance. I also think (with not especially high confidence) that “I’m about to fail at beeminder” is more motivating than “I’m about to lose health in HRPG” even if I don’t have a pledge; but if I’m ahead, HRPG is better at motivating me to keep going. My HRPG has only occasionally got to the point where I was worried about dying, and then fear of failure was more motivating than normal, but usually I can coast.
I haven’t tried it, but I believe I’ve written my own gamification system. It is a simple time-tracking system with a history viewer, in which I clock my activity for the day. I have my own goals that I try to achieve every day. It has been very effective so far, keeping me working 30 hours each week for 7 weeks.
I mostly use the web-based version, so memory leaks aren’t really an issue. On what platform did you have problems? (I also have the android app, but don’t use it much)
See here and here. I used the web version, which was extremely buggy. From your comments I assume it might not be that terrible anymore, so I’ll look at using it again.
I use it extensively and it’s been by far the most successful of all the productivity systems I’ve tried. For one, I’ve stuck with it since February of this year, whereas most of my past attempts have lasted around a month. For another, there are enough community aspects to it that a couple of times when my productivity has been low, I’ve gotten a lot of encouragement from the community to get started again.
You and I were talking about this in IRC. I remember expressing a concern about HabitRPG that, while it does genuinely motivate me at the moment, I’m not sure what’s going to happen when it ends: when I’ve upgraded all my items, when I’ve collected all the pets, etc etc. If I just start over, the new game will likely motivate me significantly less than the first time around. And more than likely I just plain won’t want to start over.
I’ve been trying to think of ways around this gamification problem, because it plays a part in nearly every attempt at gamification I’ve seen. I think that, for one aspect of gamification—motivating yourself to learn new things—there is a way that at least sort of overcomes the ‘what happens when it ends?’ problem:
Skill trees, like this. Maybe a website, or application, that starts with just the bare-bones code for creating skill trees, where you can create an account and add a skill tree from a list of existing searchable skill trees, or create your own if you can’t find one that’s appropriate for you, which would then allow other people with similar goals to add your skill tree to their account, etc.
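The bare-bones core of such a skill-tree application is just a dependency graph with an unlock rule. A minimal Python sketch, with hypothetical skill names, of what that starting code might look like:

```python
class SkillTree:
    def __init__(self):
        self.prereqs = {}     # skill name -> set of prerequisite skills
        self.unlocked = set() # skills the user has already unlocked

    def add_skill(self, name, prereqs=()):
        self.prereqs[name] = set(prereqs)

    def can_unlock(self, name):
        # A skill is available once all its prerequisites are unlocked.
        return self.prereqs[name] <= self.unlocked

    def unlock(self, name):
        if not self.can_unlock(name):
            raise ValueError(f"missing prerequisites for {name}")
        self.unlocked.add(name)

tree = SkillTree()
tree.add_skill("HTML")
tree.add_skill("CSS", prereqs=["HTML"])
tree.add_skill("JavaScript", prereqs=["HTML"])
tree.add_skill("Frontend framework", prereqs=["CSS", "JavaScript"])

tree.unlock("HTML")
tree.unlock("CSS")
print(tree.can_unlock("Frontend framework"))  # False: JavaScript still locked
```

Sharing trees between accounts would then be little more than serializing the prereqs dictionary.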
BTW I’ve started a LessWrong Party on HabitRPG for when they start implementing new mechanics that will take advantage of parties. If anybody wants to join the party send me your User ID, which you can find in Settings > API
Skill trees, like this. Maybe a website, or application, that starts with just the bare-bones code for creating skill trees, where you can create an account and add a skill tree from a list of existing searchable skill trees, or create your own if you can’t find one that’s appropriate for you, which would then allow other people with similar goals to add your skill tree to their account, etc.
That definitely looks interesting, and I’ve been thinking about pretty similar things. I hadn’t found Dungeons and Developers, and it is pretty neat (though it’s just a proof-of-concept / a fancy way of displaying a list of web development skills).
Looks kinda interesting. When I was 18 or so, I tried setting up a similar system, but it failed for a couple of reasons. I’ll give it a try. Since I’ve noticed that small (online) notifications (such as upvotes or reblogs) make me feel good, this to-do list might trigger a similar response.
We were unable to determine if there is a Less Wrong wiki account registered to your account. If you do not have an account and would like one, please go to your preferences page.
I have seen, indeed, options to create a wiki account. But I already have one; how do I properly associate the existing accounts?
Experience that I had recently that I found interesting:
So, you may have noticed that I’m interested in causality. Part of my upcoming research is using pcalg (which you may have heard of) to identify the relationships between sensors on semiconductor manufacturing equipment, so that we can apply work done earlier in my lab where we identify which subsystem of a complex dynamic system is the root cause of an error. It’s previously been applied in automotive engineering, where we have strong first-principles models of how the systems interact, but now we want to do it in semiconductor manufacturing, where we don’t have first-principles models of how the systems interact and need to learn those models from data.
Time to get R installed and pcalg downloaded correctly on Ubuntu: ~2 hours. (One of the packages that pcalg requires itself requires R 3.0, which you need to modify the Ubuntu update files to get instead of R 2.14, and a handful of other things went wrong.)
Time to figure out how to get my data into R with labels: ~2 minutes.
Time to run the algorithms to discover the causal network for the subsystem I have data for now: ~2 seconds.
I’m not sure I should also count the time spent learning about causality in the first place (which I would probably estimate at ~2 weeks), but it’s striking how much of the investment in generating the results is capital, and how little of it is labor. That is, now that I have the package downloaded, I can do this easily for other datasets. Time to start picking some low-hanging fruit.
(Living in the future is awesome: as much as I complained about all the various rabbit holes I had to go down while installing pcalg, it took way less time than it would have taken me to code the algorithms myself, and I doubt I would have done anywhere near as good a job at it.)
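For readers wondering what pcalg is actually doing in those ~2 seconds: the PC algorithm is essentially a loop of conditional-independence tests that strips edges from a fully connected graph and then orients what remains. A toy Python illustration of one such test on synthetic data; this is a sketch of the underlying idea, not the R package’s interface:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic chain X -> Z -> Y: X and Y are correlated,
# but become independent once we condition on Z.
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)
y = 0.8 * z + rng.normal(size=n)

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print(f"corr(X, Y)     = {np.corrcoef(x, y)[0, 1]:+.3f}")  # clearly nonzero
print(f"corr(X, Y | Z) = {partial_corr(x, y, z):+.3f}")    # near zero
# The PC algorithm automates many such tests to delete the X-Y edge
# while keeping X-Z and Z-Y.
```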
I’m not sure I should also count the time spent learning about causality in the first place (which I would probably estimate at ~2 weeks), but it’s striking how much of the investment in generating the results is capital, and how little of it is labor. That is, now that I have the package downloaded, I can do this easily for other datasets. Time to start picking some low-hanging fruit.
Absolutely. When I look at my own projects, they go like ‘gathering and cleaning data: 2 months. Figuring out the right analysis the first time: 2 days. Runtime of analysis: 2 hours.’
The first time this happened to me, it drove me nuts. It reminded me of writing my first program, where it took maybe 20 minutes to write even while looking everything up, and then 2 hours to debug. That was when the true horror of programming struck me. Years later, when I came across the famous quote by Wilkes, “I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs,” I instantly knew it for truth.
That was when the true horror of programming struck me. Years later, when I came across the famous quote by Wilkes, “I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs,” I instantly knew it for truth.
This realization caused one of my big life mistakes, I think. It struck me in high school, and so I foolishly switched my focus from computer science to physics (I think, there might have been a subject or two in between) because I disliked debugging. Later, I realized that programming was both powerful and inescapable, and so I’d have to get over how much debugging sucked, which by then I had the emotional maturity to do (and I suspect I could have done back then, if I had also had the realization that much of my intellectual output would be programming in basically any field).
I think the whole experience is also interesting on a meta-level. Since programming is essentially the same as logical reasoning, it goes to show that humans are very nearly incapable of creating long chains of reasoning without making mistakes, often extremely subtle ones. Sometimes finding them provides insight (especially in multi-threaded code or with memory manipulation), although most often it’s just you failing to pay attention.
Threading is not normally part of logical reasoning. Compare with mathematics, where even flawed proofs are usually (though not always) of correct results. I think a large part of the difficulty of correct programming is the immaturity of our tools.
What is the problem with grounding medical practice in the cold logic of numbers? In theory, nothing. But in practice, as decades of work in fields like behavioral economics have shown, people — patients and doctors alike — often have a hard time making sense of quantified risks. Douglas B. White, a researcher at the University of Pittsburgh, has shown that the family members of seriously ill patients, when presented with dire prognoses, typically offer quite variable understandings not only of qualitative terms such as “extremely likely” but also of quantitative terms such as “5 percent.” We like our numbers, but despite our desire for better information and an ethic of “informed consent,” we don’t know how to use them.
Far more worrisome is where the numbers come from. Until the last decade or so, estimates of risk came from a doctor’s head. Now the numbers often come from a machine, which makes them seem objective and credible. But like Dorothy confronting the Wizard of Oz, we need to look behind the curtain. It seems that anyone with a Big Data set and a statistics software package can develop an algorithm, give it a user-friendly interface, and behold: Your future is foretold. It’s fast. It’s simple. But it’s opaque, and it may be wrong.
The Omnibus Risk Estimator is one of many available cardiovascular disease risk calculators. When you enter a patient’s data into them, you get a disturbingly wide range of results. Depending on which algorithm you use, you may need a lifetime of statin therapy. Or not.
I will posit that we can actually reduce civilian casualties through the use of [autonomous lethal robots], just as we do with precision-guided munitions, if… the technology is developed carefully. Only under those circumstances should the technology be released into the battlefield...
I am not a proponent for lethal autonomous robots… The question is… if we are in a war, how can we ensure these systems behave appropriately?
...I am not averse to a ban...
...Part of [what drew] me into this discussion is that I’ve been part of the development of [autonomous robots] for 25 years, so I have to bear some of the responsibility for the advent of this technology...
Atrocity is an inherently human behavior, one that does not have to be replicated in a robotic system...
I’ve argued… for a moratorium as opposed to an outright ban. We need to take the time to think about what we’re defining, what we’re banning… we need to be able to determine: “Can research ultimately reduce human casualties in the battlespace?” To me that’s a consequentialist argument. If we can save lives through the use of this technology… [then] there is a moral imperative for them to be used.
This is a research hypothesis, it is not a definitive statement. I am optimistic that this level of performance can be achieved, for two reasons: one is [that]… machines are getting stronger and smarter and more capable than humans are. If you look at human performance in the battlefield, that is a relatively low bar...
But until we can [design robotic platforms that can reduce human casualties], we have to be circumspect about how we move forward. And that requires a pause. We need to be able to investigate whether this is a feasible solution.
Google does this for the same reason, by the way. Google argues that human drivers are the most dangerous thing on the road, and if you want to save lives, we’ve got to get people out of cars and get robots driving them.
...Assuming wars will continue… what is the appropriate role of technology.… Nations all across the world… use [this technology] for force multiplication… to extend the fighter’s reach, [etc.], but there has been [almost no] work on the question of how we can reduce noncombatant casualties.
Recently, Human Rights Watch… has come out with a call for a ban… along with many other NGOs… Shortly thereafter… coincidentally, the US Department of Defense mandated restrictions on the development of these things in what I call a “quasi-moratorium”, where certain classes of these systems are not to be developed for at least 10 years, and in 5 years we’ll revisit whether 10 years is enough...
People are correctly saying “We need to examine this, we cannot blindly go forward”...
...My underlying research thesis… is not a short-term research agenda, and it requires a substantial research effort… by a whole community… that robots can ultimately be more humane than human beings in military situations. They will never be perfectly ethical. They will make mistakes. But if they make fewer mistakes than human warfighters do, that translates into the saving of noncombatant lives.
I often find that postings with few votes nonetheless sometimes come with a lively and upvoted discussion.
The listing of postings shows the karma of the posting but gives no indication of the volume and quality of the discussion.
At least when displaying a posting, it should be easy to show an additional karma score indicating the sum total of the comments. That’d give an indication of the aggregate.
This would improve awareness of the discussions on postings.
On the other hand such a scoring might further drain away participation from topics which fail to attract discussion.
As long as the controversial discussions are up-voted that shouldn’t be a problem.
Except if you disagree with the classical system of thesis, antithesis, synthesis.
Indeed. But that doesn’t mean that we cannot infer signal from that.
Human emotions are also primary signals. And nonetheless you can e.g. use the perception of shouting (accompanying anger) to locate conflict areas in a social group. In a way, karma expenditure is such shouting, and it draws attention.
Part of the problem is that karma is one-dimensional. Each emotion-pair is a dimension, and we have no way to signal e.g. happiness, fear, awe, … Slashdot, for example, has the funny tag. That could be used.
An entirely different approach would be to vote on the votes. But for that the votes would need to be visible. And voting would have to have an associated cost.
One obvious problem with that system; what happens with habitually bad posters?
Let’s say I write something so insipid and worthless that it’s worth every downvote on the site… and then a better-quality poster writes an excellent point-by-point take-down of it and gets tons of upvotes for it. Should I then benefit from “generating” such a high quality rebuttal, or is that just going to weaken the already weak incentive structure the karma system is supposed to be creating?
I can think of a good case just in the last few days of a poor-quality poster who would seriously benefit from this system, and as a long time poster here you can probably think of more.
I don’t think an excellent point-by-point take-down in a comment is a good idea, because:
a) It is not very visible; if it is excellent and makes a point, it should be done as an independent posting.
b) Writing a point-by-point take-down may be overkill and alienate the initial poster. Compensating him with karma may make up for this (and motivate the commenter to post separately).
c) I think individual counterpoints should be addressed by individual comments, to allow them to be voted on and commented on individually.
In the remaining cases, and if the point-by-point reply is well meaning and clarifies matters that were unclear to the initial poster: why shouldn’t he get some credit for honestly (possibly mustering some courage) bringing up a question?
It is of course important to choose a suitable fraction of the comment karma.
Let’s say I write something so insipid and worthless that it’s worth every downvote on the site… and then a better-quality poster writes an excellent point-by-point take-down of it and gets tons of upvotes for it.
You have motivated the better poster to write an excellent post.
If your original post was really insipid and useless, it would just be ignored. Capable people rarely waste effort on refuting truly worthless stuff.
Not very. Note my hedging in mentioning “capable” people :-)
I think that in the short term there is the incentive to pile onto the stupid post and shred it to bits. But the bloom on this flower fades very rapidly. Smart people tend to realize that it’s not a good use of their time.
Contrast this to a nonstupid but controversial position which motivates someone to write an excellent piece—for an example consider Yvain’s anti-neoreactionary FAQ.
Thinking a bit about it, I don’t think it would be a good idea to add the generated karma to the top poster, or even a fixed fraction of it. The fraction should decrease with distance from the root, because all intermediate comments also deserve a share of the pot, and the pot shouldn’t increase by itself. The function of distance should be between harmonic and exponential, I think. Or it could be tuned by sampling actual comment trees (except that the whole idea is to influence the shape of the tree).
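A minimal Python sketch of one such schedule; the geometric ratio is an arbitrary assumption, and I’m reading “distance” as distance up the chain from the karma-earning comment:

```python
def ancestor_shares(depth, decay=0.5):
    """Split one comment's karma pot among its `depth` ancestors.

    The weight at distance d is decay**d (the exponential extreme);
    swapping in 1.0 / d gives the harmonic alternative. Shares are
    normalized so the pot is divided, never grown.
    """
    weights = [decay ** d for d in range(1, depth + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# A comment at depth 4 generates a pot of 10 karma for its ancestors:
for d, share in enumerate(ancestor_shares(4), start=1):
    print(f"ancestor at distance {d}: {10 * share:.2f} karma")
```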
I am interested in getting better at negotiation. I have read lots on the subject, but I have realised that I was not very careful about what I read, and what evidence the authors had for their advice. I have stumbled on a useful heuristic to find better writing.
The conventional wisdom in negotiating says you should try very hard to not be the first person to mention a price.
I can see that, if you’re the seller, it may make sense to try to convince the buyer that they want the product before you start the price negotiation.
When it comes to the price negotiation though, I am not confident about the conventional wisdom.
Given what we know about anchoring, it seems more likely that the first figure mentioned will tend to anchor the negotiation, influencing the final price towards it.
I hadn’t read anything discussing this until I thought of it myself. But searching for ‘anchoring’ as well as ‘negotiation’ gives plenty of links to negotiation advice that looks much more evidence-based than what I’d read before: example. Checking for ‘anchoring’ in the index would probably work as a filtering technique for books on the subject, but I have not tested this.
“Different cars have different prices. I have seen ones worth $ 100 000, but I guess this one really isn’t one of them. How much would you give for this car?”
I’ve used anchoring successfully when negotiating a TV price at a large electronics retailer. I used the price of a cheaper TV with fewer features as an anchor. YMMV.
A while back, I tried to learn more about the efficacy of various blindness organizations. Recently, I realized that (1) I really need some sort of training, and (2) my research method was awful, and I should have just looked up notable blind people and looked for patterns in their training.
I avoided doing the “look at blind people” research out of it sounding boring, until not even an hour ago, when I just opened all the pages on blind Americans on Wikipedia and skimmed them all for the information in question.
Either most notable blind Americans don’t get any noteworthy training, or this was left out of a great many wiki pages. Only four people had specific training mentioned, all from different, mostly local schools; one other had training implied (a guide dog was mentioned), while most of the others could conceivably have needed nothing more than a braille/cane instructor (and several could have gone without even those, based on their articles).
So, apparently, no organization has a monopoly on success, however loudly some proclaim otherwise. This doesn’t help me shop for a training center/method, but it’s data.
Just finished MaddAddam, the conclusion of Margaret Atwood’s Oryx and Crake dystopian trilogy. It is very well written and rather believable. Her dry sardonic humor is without peer. There are many LW-relevant ideas and quotes in it, but here is one that is most un-LW, about a cryopreservation company, CryoJeenyus:
“Your friend has unfortunately had a life-suspending event”
“I’m so sorry for your temporary loss.”
“Temporary Inertness Caretaker”
“a ferrying of the subject of a life-suspending event from the shore of life on a round trip back to the shore of life.” It was a mouthful, but CryoJeenyus went in for that kind of evasive crapspeak. They had to, considering the business they were in: their two best sales aids being gullibility and unfounded hope. CryoJeenyus did not call those vehicles “hearses”: they were “Life2Life Shuttles”.
What the hell is going on with all the ads here? I’ve got keywords highlighted in green that pop up ads when you mouse over them, stuff in the top and sidebars of the screen, popups when loading new pages… all of this since yesterday.
Normally I would think this sort of thing meant I had a virus (and I am scanning for one with everything I have) but other people have been complaining about stuff like this as well over the last few days.
I would be glad to donate if the site needs more money to stay up, but this is absolutely unacceptable.
This is an extraordinary claim by Eliezer Yudkowsky that progress is a ratchet that moves in only one direction. I wonder what, say, the native Americans circa 1850 thought about Western notions of progress? If you equate “power” with “progress” this claim is somewhat believable, but if you’re also trying to morally characterize the arc of history then it sounds like you’ve descended into progressive cultism and fanaticism.
So, now replying knowing your context, this actually came up in discussion with Eliezer at the dinner after his talk at MIT. The most agreed upon counterexample was more restrictive drug laws. But if one interprets Eliezer’s statement as being slightly more poetic and allowing that occasional slips do occur but that the general trend is uni-directional, that looks much more plausible. And the opinion of the general American population in 1850 in many ways doesn’t enter into that: most of that population took for granted factually incorrect statements about the universe that we can confidently say are wrong (e.g. not just religious belief but belief in a literal global flood and many other aspects of the Abrahamic religions which are demonstrably false).
The most agreed upon counterexample was more restrictive drug laws.
What is the example? that restrictive laws go against the Enlightenment? or that Prohibition was reversed and people are expecting other drug laws to be reversed?
Without a metric, how is this any different from saying that the fall of monarchies in the 20th century is a step backwards?
You were responding to DI, who asked about Native Americans of 1850. Like those of today, they would probably applaud restrictions on alcohol and condemn restrictions on their own intoxicants. A very simple first-approximation theory of prohibition is that the West conquered the world and restricted intoxicants to its favorites. It was too late to ban coffee and cigarettes (and maybe stimulants are held to different standards), but it banned other drugs as soon as it noticed them.
Sure, people in 1850 had lots of false beliefs. Which ones are relevant to drugs?
That wasn’t the point of the drug example. The point of the drug example was that Eliezer agrees that morality hasn’t gone in an absolute one way direction.
Fine, but by making “less factually incorrect statements about the universe” your measure of progress, you’ve essentially assumed what you’re trying to show—the superiority of Enlightenment-based, progressive civilization.
Not really. Someone can have a detailed and correct understanding of the universe and not have that impact their morals. What’s relevant here is that some of those aspects directly inform morals. We now know that an Abrahamic deity is extremely unlikely, as are most other classical notions of deity. Thus morals, values, or general deontological rules based on divine revelation are not by themselves worth looking at. Similarly, at a meta-level, we know to pay less attention to arguments based on religious texts when people discuss issues where moral views disagree.
You could set up a comment voting system based on the theory of forum quality that says only high-value comments in response to other high-value comments indicate a healthy forum. Make any upvote or downvote on a child comment also upvote or downvote the parent comment it is replying to. You’d get a bonus of upvotes if someone replied to your upvoted comment with another upvoted comment, but people would be hesitant to reply to downvoted comments, since readers would likely withhold upvotes from the replies to keep the parent from benefiting.
This might be a thing that needs to be plugged into a forum from the start, letting the local culture form around it, rather than dropped into an existing forum. Also, there might be the problem that votes tend to degenerate into tracking consensus agreement instead of comment quality, and this system might exacerbate some groupthink failure modes.
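A minimal sketch of the propagation rule described above, assuming a toy in-memory comment store:

```python
class Forum:
    def __init__(self):
        self.parent = {}  # comment id -> parent id (None for top level)
        self.score = {}   # comment id -> net votes, own plus inherited

    def add_comment(self, cid, parent=None):
        self.parent[cid] = parent
        self.score[cid] = 0

    def vote(self, cid, delta):
        # A vote lands on the comment and on every ancestor above it,
        # so replies to good comments boost the whole chain.
        while cid is not None:
            self.score[cid] += delta
            cid = self.parent[cid]

forum = Forum()
forum.add_comment("root")
forum.add_comment("reply", parent="root")
forum.vote("reply", +1)
print(forum.score)  # {'root': 1, 'reply': 1}: the parent benefits too
```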
We could test this theory by using it on the existing data and selecting the best comments under this theory. I would be interested in reading the “top 20 (or 50) LW comments ever” found by this algorithm, posted as a separate article. It could give us an approximate idea of what exactly the new system would incentivize.
Is there a good canonical source for all LW comments ever? I’m interested in importing the data into Python and playing around with ranking algorithms. (I’m not sure what disclaimer to use to keep others from not doing the same just because I publicly said that I’m interested in it, but yeah, feel free to duplicate work and come up with other interesting analyses)
It’s probably doable to use those to scrape comments and put them into some kind of list or database, but spending time looting LW comments that way seems like wasted effort compared to getting a full dump from an official source.
Ask a lot of good questions so that other people do the real work and say lots of stupid shit to people you don’t like. Would this be sufficient to game this system? :)
Someone should make a top level post introducing a contagious meme for upvoting good questions. Good questions are upvoted too rarely, but I don’t think we need software to fix it.
Absolutely. For my own part, I find that getting my good questions answered provides sufficient incentive, but I’m willing to believe that I’m atypical in that respect and that further incentive above that, up to a threshold, would be beneficial, and I have no idea where we are relative to that threshold.
I don’t really think this is relevant to LessWrong per se, but I’m wondering if any smart folks here have attempted to solve this “internet mystery”:
The internet mystery that has the world baffled: For the past two years, a mysterious online organisation has been setting the world’s finest code-breakers a series of seemingly unsolveable problems. But to what end? Welcome to the world of Cicada 3301
Why would you want to do that? If you want to make a guarantee that the comment isn’t edited later, that happens automatically: comments get a little asterisk by the date if they’ve been edited. Or you can ask people to reply to the comment quoting it. If multiple people do so, that’s a good guarantee it hasn’t been edited later.
Um, you know you can reply to comments directly here? In the lower right-hand corner of a comment there’s a bunch of buttons, the leftmost of which is the reply button.
I was chatting with Toby Ord recently about a series of events we think we’ve observed in ourselves:
In analyzing a how the future might develop, we set aside some possible scenarios for various reasons: they’re harder to analyze, or they’re better outsourced to some domain expert at Cambridge, or whatever.
Some time passes, and we forget why we set aside those scenarios.
Implicitly, our brains start to think that we set aside those scenarios because they were low probability, which leads us to intuitively place too much probability mass on the scenarios we did choose to analyze.
I wish I had an intuitive name for this, which made analogy to some similar process, ala “evaporative cooling of group beliefs.” But the best I’ve heard so far is the “pruning effect.”
It may not be a special effect, anyway, but just a particular version of the effect whereby a scenario you spend a lot of time thinking about feels intuitively more probable than it should.
Well, it’s a pretty clear instance of the availability heuristic.
Virtue ethics versus consequentialism: The Neuroscientist Who Discovered He Was a Psychopath
I had actually been wondering about this recently. People define a psychopath as someone with no empathy, and then jump to “therefore, they have no morals.” But it doesn’t seem impossible to value something or someone as a terminal value without empathizing with them. I don’t see why you couldn’t even be a psychopath and an extreme rational altruist, though you might not enjoy it. Is the word “psychopath” being used two different ways (meaning a non-empathic person and meaning a complete monster), or am I missing a connection that makes these the same thing?
You don’t notice someone has no empathy until you see them behaving horribly. The word is being used technically to refer to a non-empathic person, but people assume that all non-empaths behave horribly because (with rare exceptions like this neuroscientist) all the visible ones do.
I hadn’t realized it before, but the usual take on non-empathic people—that they will treat other people very badly—implies that most people think that mistreating people is a very strong temptation and/or reliably useful.
The only acquaintance I’ve had who was obviously non-empathic appeared to be quite amused by harming people, and he’d talk coldly about how it would be more convenient for him if his parents were dead. If I were a non-empathic person who’d chosen a strategy of following the rules to blend into society, I would find it very inconvenient for people to think I was anything like him, and would therefore attempt to emulate empathy under most conditions. Who would want to cooperate with me in a mutually profitable endeavor if they thought I was the kind of person who would find it funny to pour acetone on their pants and then light it on fire? Having people shudder when they think of me would be a disadvantage in many careers.
This creates a good correlation between visible non-empathy and mistreating people without requiring a belief that mistreating people is generally enjoyable or useful.
Except that it does suggest that mistreating people is fun for at least some people.
Killing people in a computer game is fun for many people.
Without empathy, anything you do with other people is pretty much a game. Finding a way to abuse a person without being punished for it, is like solving a puzzle. One could move to more horrible acts simply as a matter of curiosity; just like a person who completed a puzzle wants to try a more difficult puzzle.
(By the way, this discussion partially assumes that psychopaths are exactly like neurotypical people, just minus the empathy. Which may be wrong. Which may make some of our conclusions wrong.)
One of the principles of interesting computer games is that sometimes a simple action by the player leads to a lot of response from the game. This has an obvious application to why hurting people might be fun.
No it isn’t. Why don’t you try to crawl out of your typical mind space for a moment?
That’s because it usually has good consequences for the player, the violence is cartoony, and NPCs don’t really suffer. You could be an incredibly unempathetic person and still not find hurting real people fun, even at the gut level, because it has so many other downsides than your mirror neurons firing.
I myself possess very little affective empathy, and find people suggesting that I therefore should be a sadist pretty insulting (and unempathetic). I’m also a doctor, so you people should tremble in fear for my patients :)
It’s wrong.
Well yes, it’s clearly fun for at least some people. It’s just that the observations do not require anyone to think that mistreating people is strongly tempting for many, most, or all people, which is how I read your comment above.
That’s exactly how I approach the situation. I find the claim that I can’t be moral without empathy just as ridiculous as you would find the claim that you can’t be moral without believing in god. I also find moral philosophies that depend on either of them reprehensible. Claiming moral superiority because of thoughts or affects that are easy to feign is just utter status grabbing in my book.
Why does it seem you’re still confusing sadism and nonempathy although you seemed to untangle them in the grandparent?
Imagine that you find $1000 on the street. How tempted would you feel to take it?
Imagine that you meet a person who has $1000 in their pocket. Assuming that you feel absolutely no empathy, how tempted would you feel to kill the person and take their money? Let’s assume that you believe there is almost zero chance anyone would connect you with the crime—either because you are in a highly anonymous situation, or because you are simply too bad at estimating risk.
Not very tempted, actually. In this hypothetical, since I’m not feeling empathy the murder wouldn’t make me feel bad and I get money. But who says I have to decide based on how stuff makes me feel?
I might feel absolutely nothing for this stranger and still think “Having the money would be nice, but I guess that would lower net utility. I’ll forego the money because utilitarianism says so.” That’s pretty much exactly what I think when donating to the AMF, and I don’t see why a psychopath couldn’t have that same thought.
I guess the question I’m getting at is, can you care about someone else and their utility function without feeling empathy for them? I think you can, and saying you can’t just boils down to saying that ethics are determined by emotions.
I think that ethics, as they actually happen in human brains, are determined by emotions. What causes you to be a utilitarian?
There’s more to it than that. How about upbringing and conditioning? Sure, they made you feel emotions in the past, and they probably have a huge impact on your current behaviour, even though they might not make you feel emotions now.
Caring about someone else’s utility function is practically the definition of empathy.
Nitpick: I’ve seen a distinction between affective empathy (automatically feeling what other people feel) and cognitive empathy (understanding what other people feel), where the former is what psychopaths are assumed to lack.
In practice, caring without affective empathy isn’t intuitive and does take effort, but that’s how I view the whole effective altruism/”separating warm fuzzies from utilons” notion. You don’t get any warm empathic fuzzy feelings from helping people you can’t see, but some of us do it anyway.
That’s the way I care and try to care about people.
This is a valid point and it actually makes my statement stronger. Simply understanding what people like/dislike may not be considered ‘true empathy’, but caring about what they like/dislike certainly is.
If I make chicken soup for my friend when he’s sick, and then I feel good because I can see I’ve made him happy, that’s empathy. If I give $100 to a charity that helps someone I will never see, that’s not empathy. The reward there isn’t “I see someone happy and I feel their joy as my own.” It’s knowing abstractly that I’ve done the right thing. I’ve done both, and the emotional aspects have virtually nothing in common.
All forms of empathy must necessarily be indirect. When you see your friend happy, you don’t directly perceive his happiness. Instead, you pick up on cues like facial expression and movements. You extract features that correspond to your mental model of human happiness. Let me make this clear and explain why it’s relevant to the discussion.
Let’s say your friend is asleep. You make him chicken soup, leave it on the table, and go to work. He later sends you a single text: “Thanks, the chicken soup made me really happy.” This puts a smile on your face. I’m pretty sure you would consider that the first form of empathy, even though you never saw your friend happy. Indeed, the only indication of his happiness is several characters on a phone display.
Now let’s take this further. Let’s say every time you make your friend chicken soup it makes him happy, so that you can predict with confidence that making him chicken soup will always make him happy. Next time you make him chicken soup, do you even need to see him or get a text from him? No, you already know it’s making him happy. Is this type of empathy the first kind or the second kind?
I’d call it the first kind, because it actually causes warm-fuzzy-happy feelings in me. My emotion reflects the emotion I reasonably believe my friend is feeling. Whereas the satisfaction in knowing I have done the right thing for someone far away whom I don’t know and will never meet is qualitatively more like my satisfaction in knowing that my shoes are tied symmetrically, or that the document I have just written is free of misspellings. I’ve done The Right Thing, and that’s good in an abstract aesthetic way, but none of my feelings reflect those I would believe, on reflection, that the recipient of the good deed would now be feeling. It doesn’t put a smile on my face the way helping my friend does.
Well, what you say you feel is subjective (as is what I say I feel), but when I personally donate to charity it’s because helping people—even if I don’t directly see the results of my help—makes me happy. If not the ‘warm fuzzy feeling’, at least a feeling comparable to that of helping my friend. That is my subjective feeling.
Nah, you can care about someone’s utility function instrumentally. In fact, I think that’s the way most people care about it most of the time, and I have no reliable evidence to suggest otherwise.
I meant ‘caring’ as in direct influence of their utility on your utility (or, at least, the perception of their utility on your utility), conditionally independent of what their utility results in. If you take ‘care’ to simply mean ‘caring about the outcomes’ then yes you’re right. Saying that all people are that way seems quite a strong statement, on par with declaring all humans to be psychopaths.
So you meant instrumentally in the first place. I misunderstood you, so I retracted both the comment and the downvote.
Definitely not. Psychopaths are far more anomalous than selfish people. Also, I said most people most of the time, not all people all the time.
I suppose the word ‘psychopath’ is itself problematic and ill-defined, so fair enough.
They could. But if you select a random psychopath from the whole population, what is the probability of choosing a utilitarian?
To be afraid of non-empathic people, you don’t have to believe that all of them, without an exception, would harm you for their trivial gain. Just that many of them would.
You would also have to know in what proportion they exist to know that, and you don’t have that information precisely because of such presumptions. You wouldn’t even know what’s normal if displaying certain qualities is useful enough, and detecting whether people really have them isn’t reliable enough.
It’s possible to steelman that hypothetical to the threshold that yeah, killing someone for their money would be tempting. It wouldn’t have much resemblance to real life after that however.
There are several other reasons not to kill someone for their money than empathy, so I’m not sure how your hypothetical illustrates anything relevant.
Conversely, I used to assume that having empathy implied treating others well—that all people who were especially empathetic also wanted to be nice to people.
The nearest term used in contemporary psychiatry is antisocial personality disorder. AFAIK some forensic psychiatrists use the term psychopath, but the criteria are not clear and it’s not a recognized diagnosis. Forget about the press the term gets.
Lack of empathy certainly isn’t sufficient for either label, and can be caused by other psychiatric conditions.
I don’t understand that connection you made. Care to explain?
You can’t determine someone is a psychopath via a brain scan yet. You can’t even determine someone has Alzheimer’s with only a brain scan, even though it’s pretty well understood which brain regions are damaged in the disease. Psychopathy is a syndrome, and still quite poorly understood. Note also that there would be significant problems with testing if he went to a psychiatrist after knowing his scan results.
I think that neuroscientist is just trying to make money by claiming he’s a psychopath, which of course would be quite a psychopathic thing to do :)
It struck me as relevant to the philosophical question: here’s someone who has had to think hard about “what makes a psychopath or sociopath?” He is, in his social actions, a reasonably normal and productive citizen, but worries about how much of a dick he can be, actually or potentially.
Hormones and personality
Account by someone who’s highly dependent on prescription hormones to function, some description of the difficulties of finding a doctor who was willing to adjust the hormones properly, a little about the emotional effects of the hormones, and a plea to make hormones reliably available. It sounds like they’re almost as hard to get reliably as pain meds.
Society, and even some aspects of our medical system, are fond of the naturalistic fallacy. I wonder, if we have this much trouble in situations where the treatment simply returns someone to the base rate, how much of a reaction is there going to be in a few decades when the more directly transhuman modifications start coming online?
Returning to the base rate is cheaper and more egalitarian, so what seems like the naturalistic fallacy isn’t necessarily that. In a completely private health care system these things wouldn’t matter, though.
I don’t get it. Why would she need to deal with dozens of people to get meds she clearly needs for a single condition? My heuristics point at her version of the story leaving out something important.
Alternatively you really do have a screwed up health care system in the US.
Most of these drugs are pretty well-known. Hydrocortisone (or prednisone) is probably the most common and easiest drug, both cheap and commonly prescribed as a general immunosuppressant. The thyroid drugs (probably thyroxine and liothyronine) have a number of on-label uses that could be coherently stretched to cover this particular condition, and are common enough to be in the average pharmacy network. There’ll be some hesitancy to mess with doses heavily—especially after you achieve basic functioning—because of the high risk of adrenal shock, something that the author experienced in at least one high-profile incident. In women, combination estrogen-progesterone therapy is recommended, and it’s not that dissimilar from the Pill, except opposite in effect.
But that’s not a dozen drugs, and that’s about the full scale of well-documented treatment. There’s not much literature on the use of testosterone in women, for example, and I can think of a half-dozen neurochemicals she might be pioneering. There are endocrinologists that enjoy working at the frontier of drug discovery. There aren’t a huge number that do so and also have patients that walk on two legs and are known for food preferences other than cheese.
There are also secondary issues. The drug industry has some severe logistics issues, resulting in many drug shortages. One of the most common thyroxine supplements has been on back-order and is scheduled to stay that way until 2014 after a rather goofy recall. This isn’t unique to hormones (although the levothyroxine example is especially ridiculous), but it matters.
That’s true, I missed the sentence about a dozen drugs. Keep in mind though she might not take all of them exclusively for that particular condition.
I would be interested if you named a few, and whether there’s any evidence of their usefulness.
If that’s the case, the question becomes whether she should really be allowed to do that. I have no problem with it if the system allows the patient to be completely responsible for taking those drugs, but I don’t think any doctor or insurance company should be expected to take the fall for her. If the drug isn’t well documented and she doesn’t take part in a trial, I think she should finance treatment for any complications herself, and that could easily get more expensive than she can afford.
Her complaints are not the usual ones about the American system. What leads you to believe that it would be different elsewhere? The insurance companies add a gatekeeper, but it’s not the only one.
If she complained that the system simply denied her what she wanted, I’d find it quite plausible that there is more to the story. But instead she says that it is a struggle every month. What could be the other side to that story?
I only have a really vague idea of what those are.
I’m a doctor from Finland, and don’t think she would have similar problems here, if she really needed those hormones. Does something lead you to believe it wouldn’t be different elsewhere?
That’s what I find weird too. I can’t imagine why that would happen, at least in my country. I’ve never even heard of the kinds of complaints she has.
I don’t know, I’m not an endocrinologist. A wild guess amongst many: doctors really do have good reasons to believe she doesn’t need the amounts of hormones that she thinks she does (placebo), or even that they’re harmful. She could for example have a psychiatric diagnosis or lots of previous pointless visits recorded, which would strengthen this suspicion further. She would have to do doctor shopping to get her prescriptions renewed, which would make it a constant struggle.
I also find it weird that the diagnosis took years when she had such severe symptoms.
I haven’t had many dealings with the medical profession, but her story didn’t seem wildly implausible to me—I have friends who’ve had a hard time, though not that bad, getting the care they need.
Anyone else care to weigh in on whether her story seemed implausible to them, or plausible but at the low quality medical care end of the spectrum?
From my POV, patients quite commonly have misconceptions about what conditions they have, what led to their diagnosis, what treatments they’re getting or why, and especially why they’re denied treatment. She seems reasonably intelligent and educated, so this is a bit less probable.
By the system I would mean the whole system: communication between doctors, their colleagues and pharmacies, continuity of electronic patient histories, insurance, reimbursements, prescription policies, etc. I’m saying that the problem could be in some of those parts, just to be clear.
What I find more interesting is how she had to explore hormone space to find the combination that was her. Reminds me of the experiences of some transgender people I know.
Moldbug weighs in on the Techcrunch thing, with words on LW:
We’re so humorless that our primary piece of evangelical material is a Harry Potter fanfiction.
It’s “humorless” that hurts the most, of course.
Out of interest, does anyone here have a positive unpacking of “wisdom” that makes it a useful concept, as opposed to “getting people to do what you want by sounding like an idealised parental figure”?
Is it simply “having built up a large cache of actually useful responses”?
Paul Graham takes a stab at it:
Wisdom seems to be basically successful pattern matching of mental concepts to situations, and you need life experience as the training data (for the mental concepts, the varieties of situations, and the outcomes of applying different concepts to different situations) to get it running at the sort of intuitive level where you need it.
I think Moldbug is somewhat on target, LW doesn’t really have much in the way of either explicitly cultivating or effectively identifying the sort of wisdom that lets you produce high-quality original content, beyond the age-old way of hanging around with people who somehow can already do it and hoping that some of it rubs off. So we get people adopting the community opinions and jargon, getting upvotes for being good little redditors, not doing much else, and thinking that they are gaining rationality. We haven’t managed to get the martial art of rationality thing going, where there would be a system in place for getting unambiguous feedback on your actual strength of wisdom.
Prediction markets are one interesting candidate for a mechanism for trying to measure the actual strength of rationality.
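For a concrete sense of what unambiguous feedback on calibration could look like, here is a toy Brier-scoring sketch in Python; the forecasts in the example are made-up numbers, not anyone’s actual predictions:

    # Brier score: mean squared error between stated probabilities and outcomes.
    # Lower is better; always guessing 50% on binary questions scores 0.25.
    def brier_score(forecasts):
        """forecasts: a list of (probability_assigned, outcome), outcome 0 or 1."""
        return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

    # Made-up example: predictions at 90%, 70% and 20%, with outcomes 1, 0, 0.
    print(brier_score([(0.9, 1), (0.7, 0), (0.2, 0)]))  # about 0.18

A prediction market aggregates this sort of scoring across many participants; the point here is only that the feedback is numeric and hard to argue with.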
In this case he could not be farther off target if he tried. Yvain’s writings are some of the best, most engaging, most charitable and most reasonable anywhere online. This is widely acknowledged even by those who disagree with him.
Unfortunately most of Less Wrong is non-Yvain.
The point is not even Yvain’s writings are high-quality enough, in Moldbug’s view.
When I was very young, I had a funny idea about layers of information packing.
“Data” is raw, unfiltered sensory perception (where “senses” include instruments/etc.) “Information” is data, processed and organized into a particular methodology. “Knowledge” is information processed, organized and correlated into a particular context. “Intelligence” is knowledge processed, organized and correlated into a particular praxis. “Wisdom” is intelligence processed, organized and correlated into a particular goalset. “Enlightenment” is wisdom processed, organized and correlated into a particular worldview.
I never rigorously defined what the process was to my own satisfaction, but there seemed to my young mind to be an isomorphic ‘level-jumping’ process between each layer that involved processing, organizing and correlating one’s understanding at the previous layer.
In my own head, I mostly unpack “smart” as being able to effectively reason with a given set of data, and “wise” as habitually treating all my observations as data to reason from. Someone with a highly compartmentalized mind can be smart, but not wise. If (A → B) but A is not actually true, someone who is smart but not wise will answer B given A, where someone wise will reject A.
That said, this seems to be an entirely idiosyncratic mapping, and I don’t expect anyone else to use it.
It is interesting how similar in style and thought patterns this is to the far left rant about LW from a few months ago.
In what way?
Cult accusations, criticism by way of comparison to things one doesn’t like simply because they bear similar names, use of ill-defined terms as part of that criticism, bizarre analogies to minimally historically connected individuals (Shabtai Tzvi? Seriously? Also does Moldbug realize what that one particularly sounds like given Eliezer’s background?), phrasing things in terms of conflicts of power rather than in terms of what might actually be true, operating under the strong presumption that people who disagree with one are primarily motivated by ulterior motivations rather than their stated ones especially when those ulterior motivations would support one’s narrative.
FWIW Moldbug is Jewish too.
I was right
Starting today, Monday 25 November 2013, some Stoic philosophers are running “Stoic Week”, a week-long mass-participation “experiment” in Stoic philosophy and whether Stoic exercises make you happier.
There is more information on their blog.
To participate, you have to complete the initial exercises (baseline scores) by midnight today (wherever you are), Monday 25 November.
Recently, a planetary system similar to our own solar system was found. This is one of the first cases where one has rocky planets near the star and large gas giants farther away, like our own solar system. Unlike our system, this one apparently has everything fairly close in, with all the planets closer to the star than the Earth is to the sun.
In case anyone’s wondering, it looks as if the star is comparable in size and luminosity to ours, so this system probably isn’t any more hospitable to Earth-like life than our solar system would be if all the planets were moved much closer in.
You know, I really do feel like I am clinging bitterly to my priors and my meta at this point, as I joked on Twitter recently. I knew this was inevitable should our presence ever be noticed by anyone actually important, like a journalist. What I didn’t know was that it would still hurt.
This emotion shows in my reply. Should I delete it?
You shouldn’t be upset by the initial media coverage, and I say this as someone who doesn’t identify with neo-reactionary thought. Attacking new social movements is NOT inevitable. It is a sign of growth and a source of new adherents. Many social movements never pick up enough steam to receive negative coverage, and those movements are ineffective. Lots of people who have never heard of neo-reactionaries will read this article, note that parts of it are pretty obvious bullshit (even the parts that are intended to be most negative; lots of people privately believe that IQ and race are connected even if they are publicly unwilling to say anything of the sort), and follow the links out of interest. There are many very smart people that read TechCrunch, and don’t automatically agree with a journalist just because they read an article. Obviously this is bad for Peter Thiel, who is basically just collateral damage, but it’s most definitely good for neo-reactionaries.
Gandhi’s famous quote (“First they ignore you, then they laugh at you, then they fight you, then you win.”) is accurate as to the stages that a movement needs to pass through, although obviously one can be stopped at any given stage. I think we are already seeing these stages play out in the Men’s Rights movement, which is further along the curve than neo-reaction.
Clinging bitterly to your priors and your meta sounds like a sign you should update, and that’s more important than deleting or not deleting a blog comment.
As for your comment, the first two paragraphs are fine, perhaps even providing helpful clarification. The sarcasm in the third paragraph is probably unhelpful, though; maybe just edit the comment.
I don’t think you should. But maybe this is because I feel the same way (;_;), despite being just someone who endorses HBD and dislikes Progressivism but thinks Moldbug is wrong. I like this comment you made elsewhere much better than the one you linked to, though:
We’ve been noticing this process for a long time now. I now think I was wrong on this in the past. This should be a sign for what you call the “outer right” that we will only inflame the now inevitable escalation of status warfare, as social justice debates hijack attention away from human rationality to value and demographic warfare and people like us are systematically excluded from the intended audience. An explanation of some related costs for those who can’t think of them. I think your and Anissimov’s site More Right makes a nice Schelling point to regroup and continue our exploration of human rationality applied to controversial topics.
What’s HBD?
HBD = Human BioDiversity, a line of thought which asserts that humans are significantly different genetically. Often called “racism” by people who don’t like it.
To be more clear, HBDers claim not just that humans differ significantly at a genetic level (that’s pretty uncontroversial: I don’t think anyone is going to argue that genetically inherited diseases aren’t a thing, for example). As far as I can tell, HBDers believe that most or almost all mental traits are genetically determined. Moreover, HBDers seem to generally believe that these genetic traits are distributed in the population in ways that closely match what are normally seen as ethnic and racial groups, and that that explains most of the racial differences in IQ scores, life success, and rates of criminal activity.
Are you bitter about the journalist or about Eliezer?
Would you have written a response just to the journalist?
What is neoreactionaryism?
The anti-reaction FAQ describes it as “Neoreaction is a political ideology supporting a return to traditional ideas of government and society, especially traditional monarchy and an ethno-nationalist state. It sees itself opposed to modern ideas like democracy, human rights, multiculturalism, and secularism.” As far as I’m aware, neoreactionaries do not object to that description.
I feel this is a stupid question, but I’d rather ask it than not know: why would anyone want that? I can understand opposing things like democracy, secularism and multiculturalism, but replacing them with a traditional monarchy just doesn’t seem right. And I don’t mean morally; I just don’t see how it could create a working society.
I can fully understand opposing certain ideas, but if you’re against democracy because it doesn’t work, why go to a system of governance that has previously been shown not to work?
If you accept the criticisms it makes of democracy, you are already basically neoreactionary. Only about half of them advocate monarchy as what should replace our current order; remember, no one said the journalist did an excellent job reporting about us. While I can’t even speak for those who do advocate monarchy, only for myself, here are some of my reasons for finding it well worth investigating and advocating:
Good enough—You need not think it an ideal form of government, but if you look at it and conclude it is better than democracy and nearly anything else tried from time to time so far, why not advocate for it? We know it can be done with humans and can be stable. This is not the case with some of the proposed theoretical forms of government. Social engineering is dangerous; you want fail-safes. If you want to be careful and small-c conservative, it is hard to do better than monarchy: it is as old as civilization, an institution that can create bronze-age empires or transform a feudal society into an industrial one.
Simplicity—Of the other proposed alternative forms of government, it is the one most easily and accurately explained to nearly anyone. Simplicity and emotional resonance are important features with many consequences. For example, when Moldbuggians say society would benefit from formalization, they should aim for a bare-bones OS for this to be feasible. Formalization is the process whereby the gap between the actual and claimed functioning of a social institution is closed as much as possible, in order to reduce disinformation. This is considered good because uncertainty results in politics/war. There are also costs to keeping people in positions of responsibility sane and not accidentally ending up believing in disinformation, if such is common around them. Not bothering to keep them sane at all seems bad.
Agile experimentation—Social experimentation is, however, useful, especially since the same solutions won’t work for all kinds of societies in all situations. It is a system that can be easily adjusted for either robustness or flexibility as needed. A monarch has simple logistics to set up or allow social experiments. Futarchy, neocameralism… why risk running a society on this OS rather than set up a more robust one and then test it within its confines? East India Companies, free cities, and religious orders are common in the history of Western monarchy. Indeed, you can look at constitutional monarchy in modern democratic countries as experiments that were either judged successful or that breached containment. Even in this case of breach, the form of monarchy was still preserved, and might possibly be revived at a future point in time.
Responsible ideology crafting—Many neoreactionaries think the relative political stability of the Western world of the past 70 years will not last. Historically, transition from some kind of republic to military dictatorship is common. Rule by the leader of a victorious conquering army has historically shown successful transition to monarchy, as all dynasties were basically founded by them. Even if such a change isn’t itself likely in the West, the unlikely situation where neoreactionary criticism of democracy would be taken seriously and guide policy is one where the most likely victor of the social instability is not an ideal representation of a neoreactionary CEO-philosopher but a military dictator. We should try to plan social reform constrained by the logistics of the likeliest outcome of our ideas becoming important; otherwise we are irresponsible. Indeed, this might have been the grand crime of the Communist theorists.
Low Hanging Fruit—It has been understudied by modern intellectuals, who furthermore are biased against it. Compare how much modern theoretical work has been done on democracy vs. monarchy; see the number of Wikipedia articles for a quick proxy. This is perhaps practical given the situation we find ourselves in, but also somewhat absurd. For example, as far as I’m aware, no one outside reaction has considered in depth the ridiculously obvious idea of the king as Schelling point! Modern game theory, cognitive science and even sociology unleashed on studying monarchy would reveal treasures, even if we eventually decide we don’t want to implement it.
That sounds like a hell of a package deal fallacy to me.
I was trying to say that neoreactionaries basically only strongly agree on these criticisms, not on the particular solutions for how to ameliorate such problems. I hope that is apparent from the paragraph?
How are you going to distinguish them from conservo-libertarians, then? I would imagine they would also agree with much of those criticisms and would disagree as to the proposed solutions.
They don’t use the particular concepts of neoreaction: things like the Cathedral, or the idea that Progressivism is the child of Protestant Christianity, or why it drifts leftwards. There will be no clear line, as both conservo-libertarians and anarcho-capitalists are big inspirations for the neoreactionary worldview and form a big part of its bedrock. It is observed that many reactionaries tend to be ex-libertarians.
I was under the impression that they also tend to agree about certain social issues such as traditional gender roles (though after posting that comment I found out that Moldbug agrees with progressive views about homophobia); am I wrong?
What is the package?
Isn’t “oppose democracy for a specific set of reasons” a natural category?
Added, based on your other comment: “skepticism against Progressive Orthodoxy” is a lot weaker than opposing democracy.
Neoreaction is basically defined as “these particular criticisms of Progressivism & Democracy”! I’m not sure you will find common agreement among neoreactionaries on anything else.
And if we accept the Reactionary criticisms of democracy and the Progressive criticisms of aristocracy and monarchy? What then?
Then you get to happily look down on everyone’s naive worldviews until you realize that world is fucked and go cry in a corner.
Been there, done that, realized that crying won’t make the world any less fucked, come back from the corner.
Psychosocial development of puberty in a nutshell?
Doesn’t reactionary or progressive criticism in itself, if taken seriously, already do this?
Then you either throw up your hands and go meta with secession/seasteading/etc. or try to find existing systems that neither of those systems would apply to… how about Switzerland?
I am curious why Switzerland isn’t more popular among people who want to change the political system. It has direct democracy, decades of success, few problems...
The cynical explanation is that promoting a system someone else invented and tested is not so good for signalling.
The correct question is whether Switzerland’s success is caused by its political system. If not, emulating it won’t help.
We can at least be sure that Switzerland’s success hasn’t been prevented by its political system. This isn’t a proof that the system should be copied, but it’s at least a hint that it should be studied.
Switzerland is pretty small, and it’s not obvious to me that its political system would scale well to larger countries. But then again, it’s not obvious to me that it wouldn’t, either.
My very superficial knowledge says that Switzerland consists of relatively independent regions, which can have different tax rates, and maybe even different laws. These differences allow people to do some lower-scale experiments, and probably allow an individual to feel like a more important part of the whole (one in a few thousands feels better than one in a few millions). I would guess this division to regions is very important.
So a question is, if we wanted to “Switzerland-ize” a larger country, should we aim for the same size (population) or the same number of regions? Greater region size may reduce the effect of an individual feeling important, but greater number of regions could make the interactions among them more complicated. Or maybe the solution would be to have regions and sub-regions, but then it is not obvious (i.e. cannot be copied straightforwardly) what should be the power relationship between the regions and their sub-regions.
It would be safer to try this experiment first in a country of a similar size. Just in case some Illuminati are reading this discussion, I volunteer Slovakia for this experiment, although my countrymen might disagree. Please feel free to ignore them. :D
Reminds me of some large countries… in North America, I think? :-)
For various levels of superficiality, yeah.
Then again, population-wise it’s bigger than reactionary poster children such as Singapore or Monaco and comparable to progressivist poster children such as Sweden or Denmark.
Always go meta. I feel like an addict saying that.
I want to emphasize again that monarchy only recently gained popularity among neoreactionaries; it’s possible the majority of them still dream of Moldbug’s SovCorps. Anarcho-Papist, for example, basically believes anarcho-capitalism is best but thinks the neoreactionary analysis of why society is so leftist is correct.
You make incremental patches and innovations in the existing setup, and keep a very close eye on the results.
Somebody’s mind explodes :-D
The popularity of aristocratic and monarchist stories in popular culture—Star Wars, LOTR, The Tudors, Game of Thrones, possibly Reign if its ratings improve, etc.—says something about the human mind’s “comfort” with this kind of social organization. David Brin and similar nervous apologists for democracy have that working against them.
The obvious question here is, why do you think monarchy has been “shown not to work”? Is it because monarchies have had a tendency to turn into democracies? Or perhaps because historical monarchies didn’t have the same level of technology that modern liberal democracies enjoy?
That question is kinda obvious. Thanks for pointing it out.
From what I remember from my history classes, monarchies worked pretty okay with an enlightened autocrat who made benefiting the state and the populace his or her prime goal. But the problem there was that they didn’t stay in power, and they had no real way of making absolutely sure their children had the same values. All it takes to mess things up is one oldest son (or daughter, if you do away with Salic law) who cares more about their own life than those of the population.
So I don’t think technology level plays a decisive factor. It probably will improve things for the monarchy, since famines are a good way to start a revolution, but giving absolute power to people without a good fail-safe when you’ve got a bad ruler seems like a good way to rot a system from the inside.
I was in a Chinese university around George W. Bush’s second election and afterwards, which didn’t make it easy to convince Chinese students that democracy was a particularly good system for picking competent leaders (Chinese leaders are often graduates of prestigious universities like Tsinghua (where I was), which is more like MIT than like Yale, and they are generally very serious and competent, though not particularly telegenic). On the other hand, the Chinese system gets you people like Mao.
I don’t think Mao could exactly be said to be a product of the Chinese system, seeing as unless you construe the “Chinese system” to include revolutions, it necessarily postdates him.
I totally agree, and in addition, Mao is the kind of leader that could get elected in a democracy.
However, a democracy may be better at getting rid of someone like Mao than China was (provided the democracy lasts).
I’m not necessarily saying that democracy is the best thing ever. I just have issues jumping from “democracies aren’t really as good as you’re supposed to believe” to “and therefore a monarchy is better.”
I feel I should point out the Chinese system was not what got Mao into power. Instituting the Chinese system is what got him into power. And this system saw massive reform since then.
Bullets 5 and 6 of this MoreRight article point out some reactionary ideas to assuage your concerns. As Mr. Anissimov notes, it is necessary not only to consider the harm such a failure mode might cause, but also to compare it to failure modes that are likely to arise in demotist systems. Reactionary thought also includes the idea that good systems of government align their incentives such that the well-being of their ruler coincides with that of their people, so a perfectly selfish son should not be nearly as much of a concern as a stupid or evil one.
Picture an alternative Earth Prime where monarchies dominated the political landscape and democracies were seen as inconsequential political curiosities. In this Earth Prime, can you not imagine that textbooks and teachers might instead point out equally plausible-sounding problems with democracy, such as the fact that politicians face selection pressures to cut off their time horizons around the time of their next election? Can you not imagine pointing to small democracies in their world with failures analogous to failures of democracies in our world, and declaring “Q.E.D.”? How sure are you that what you are taught is a complete and unbiased analysis of political history, carried out by sufficiently smart and rational people that massive errors of interpretation are unlikely, and transmitted to you with high fidelity?
I don’t think you have to be (certainly I am not) in order not to put much credence in Reaction. From the premise that political history is conventionally taught in a biased and flawed manner, it does not follow that Reaction is unbiased or correct.
The tendency to see society as being in a constant state of decline, descending from some golden age, is positively ancient, and seems to be capable of arising even in cases where there is no real golden age to look back on, unless society really started going downhill with the invention of writing. There is no shortage of compelling biases to motivate individuals to adopt a Reactionary viewpoint, so for someone attempting to judge how likely the narrative is to be correct, they need to look, not for whether there are arguments for Reaction at all, but whether those arguments are significantly stronger than they would have predicted given a knowledge of how well people tend to support other ideologies outside the mainstream.
Of course not; even if you reject the current conventional narrative, it still takes a lot of evidence to pinpoint Reaction as a plausible alternative (nevermind a substantially correct one). But Mathias was basically saying that the models and case studies of monarchy he studied in his history classes provided him with such a high prior probability that monarchy “doesn’t work” that he couldn’t imagine why anybody could possibly be a monarchist in this day and age. I was arguing that the evidence he received therein might not have been quite as strong as he felt it to be.
At the given time, they were replaced by democracies with the same technology level they had.
The argument could be constructed that for different levels of technology, a different form of government is optimal, which sounds plausible. For a very low technology level, living in a tribe was the best way of life. For a higher level, it was theocracy or monarchy. For a yet higher level, it was democracy (and this is why the old monarchies are gone). And for an even higher level (today in the first world), it is monarchy again.
It’s a bit suspicious that the monarchy is the optimal form of government twice, but not impossible. (Although it is better to have opinions because most evidence points towards them, not merely because they are not completely impossible.)
That response is nonsense, an unfair reading. Jaime already offered your hypothesis immediately preceding:
He explicitly says that means something completely different.
I imagine that he means, quite correctly, that many comparisons between democracies and monarchies fail to compare examples at the same technology level.
As to the other point, I doubt Jaime thinks that monarchies turning into democracies is a very good argument in favor of democracies, just that it is a common implicit argument. I doubt that there are many people who think that monarchy is a good form of government at two technological levels, separated by democracy. Generally people who condemn democracy think that it was a mistake, perhaps historically contingent, or perhaps a natural tendency of technology, but one to be fought. Some reactionaries hold that this is a good time to pursue non-democracies, but usually because democracy is finally self-destructing, not because technological pressures have reversed course.
But monarchies turning into democracies is evidence against the stability of monarchies, and some reactionaries do implicitly make the argument that technology favors monarchy in two different periods.
Because you are so incredibly smart that today you will get everything right, and those old mistakes done by lesser minds are completely irrelevant...?
Maybe it’s not about people really wanting to live under some majesty’s rule, but about an irresistible opportunity to say that you are smarter than everyone else, and that you have already found a solution for all humanity’s problems.
(This was originally my observation of Communists of the smarter type, but it seems to apply to Neoreactionaries as well.)
Read ten pages of “Democracy: the God That Failed” and see if you still feel that there’s so little substance to what we believe.
Even before reading it, I already agree that democracy does not work the way people originally thought it would, and some pretend it works even today. (People voting to get money from their neighbors’ pockets. Idiots who know nothing and want to learn nothing, but their vote is just as important as Einstein’s. Media ownership being the critical factor in elections.)
That just doesn’t give me enough confidence that my solution would be better. Let’s say it would avoid some specific problems of democracy successfully. How about new problems? (Or merely repetition of the old ones, enhanced by the modern technology.)
Einstein was a physicist. He probably had more sense about politics than random inattentive person who votes on the basis of emotion, but I’m going to hope that people who actually know something about politics get influence by writing and/or politicking. Their influence isn’t limited to their vote.
In fact, Einstein was pretty politically active and influential, largely as a socialist, pacifist, and mild Zionist.
To quote myself on what I consider plausibly better than democracy:
Neocameralism in particular is something that is possibly still more popular among neoreactionaries than democracy. Here I briefly explain it:
Well, the neoreactionaries claim that strong monarchies will be more stable, and less subject to needing to satisfy the fickle whims of the population. There is some validity to at least part of the argument: long-term projects may do better in dictatorships. Look for example at the US space program: there’s an argument that part of why it has stalled is that each President, desiring to have a long-lasting legacy, makes major changes to the program’s long-term goals, so every few years a lot of work in progress is scrapped. Certainly that’s happened with the last three Presidents. And the only President whose project really stayed beyond his office was JFK, who had the convenience of being a martyr and having a VP who then cared a lot about the space program’s goals.
However, the more general notion that monarchies are more stable as a whole is empirically false, as discussed in the anti-reaction FAQ.
What I suspect may be happening here is a general love for what is seen as old, from when things were better. Neoreaction may have as its core motivation a combination of cynicism for the modern with romanticism about the past.
If you do read any of the pro-reaction stuff linked to by K (or the steelman of reaction by Yvain) I suggest you then read Yvain’s anti-reaction FAQ which provides a large amount of actual data.
Thank you. I’ll read the FAQ, it seems exhaustive and informative.
And as I hope I made clear, I can certainly understand the notion that “democracy isn’t awesome”. But I don’t get the jump from there to “a monarchy will be better.”
Read “Democracy: The God That Failed” and “Liberty or Equality” for some basic arguments.
I object to that piece being called a “Steelman of reaction” despite Yvain’s claims in his later piece.
Do you mean that the piece does not make the best case possible, or do you mean that what it is steelmanning is not neoreaction?
Until some certified reactionary can do better....
Yvain’s anti-reaction FAQ shows nothing of the sort. It cherry-picks a few examples. To compare the stability of democracies and monarchies, a much broader historical comparison is needed. I’m working on one now, but people should really read their history. Few of those who confidently claim monarchies are unstable have more than a smidgen of serious reading on Renaissance Europe under their belts.
I look forward to your response when it is published. As of right now, that’s an assertion without data.
Here: Response to Yvain on “Anti-Reactionary FAQ”: Lightning Round, Part 2 — Austrian Edition.
Considering that your response relies heavily on deciding who is or isn’t “demotist”, it might help to address Yvain’s criticism that the idea isn’t a well-defined one. The issue of monarchs who claim to speak for the people is a serious one. Simply labeling dictators one doesn’t like “demotist” doesn’t really do much. Similarly, your response also apparently ignores Yvain’s discussion of the British monarchy.
It’s just a small slice of a response, I can’t respond to everything at once...
Napoleon was a populist Revolutionary leader. That should be well-understood.
For something more substantial, try “Democracy: the God That Failed” by Hans-Hermann Hoppe.
I’m not convinced that this is a meaningful category. It is similarly connected to how you blame assassinations and other issues on the populist revolutions: if monarchies historically led to these repeatedly, then there’s a definite problem in saying that that’s the fault of demotist tendencies, when the same things have not by and large happened in democracies once they’ve been around for a few years.
Also, while Napoleon styled himself as a populist revolutionary leader, he came to power from the coup of 18 Brumaire, through military strength, not reliance on the common people. In fact, many historians see that event as the end of the French Revolution.
While I understand that responding to everything Yvain has to say is difficult, I’d rather read a complete and persuasive response three months from now than an unpersuasive one right now. By all means, feel free to take your time if you need it.
There are three decent starting points:
The Dark Enlightenment (The Complete Series) by the British philosopher Land, 28k word count
The open letter to open minded Progressives series by Mencius Moldbug, 120k word count
Reactionary Philosophy In An Enormous, Planet-Sized Nutshell by Scott Alexander aka Yvain, 16k word count
All of these have issues. I like Nick Land’s best; Moldbug is probably easier to read if you are used to the writing style here; Scott is the best writer of the three, but his piece is deficient and makes subtle mistakes since he isn’t a reactionary.
My own summary of some points that are often made would be:
If you build a society based on consent, don’t be surprised if consent factories come to dominate your society. What reactionaries call the Cathedral is machinery that naturally arises when the best way to power is hacking the opinions of masses of people to consent to whatever you have in store for them. We claim the beliefs this machine produces have no consistent relation to reality and are just stuck in a feedback loop that gives the machine more and more power over society. Power in society thus truly lies with the civil service, academia and journalists, not elected officials, who have very little to do with actual governing. This can be shown by interesting examples like the EU repeating referendums until they achieve the desired results, or Belgium’s 589 days without an elected government. Their nongovernment managed to have little difficulty doing things with important political implications, like nationalizing a major bank.
Moral Progress hasn’t happened. Moral change has, we rationalize the latter as progress. Whig history is bunk.
The modern world allows only a very small window of policy experimentation. Things like seasteading and charter cities are ideas we like but think will not be allowed to blossom if they should breach the narrow window of experimentation allowed among current Western nations.
Democracy is overvalued, monarchy is undervalued. This translates to some advocating monarchy and others dreaming up new systems of government that take this into account.
McCarthy was basically right about the extent of Communist influence in the United States of America after the 1940s. We have weird things like the Harvard Crimson endorsing the Khmer Rouge in the 70s, or FDR’s main negotiator at Yalta being a Soviet spy, cropping up constantly when we examine the strange and alien 20th century. McCarthy used some ethically questionable methods against Communists (and yes, most of his targets were actual Communists), but if you check them out in detail you will see they are no more extreme or questionable than the ones we have now routinely used against Fascists for nearly 80 years. Why do we live in a Brown-scare society, while the short second Red scare is treated by many as one of the gravest threats against liberal democracy ever? Why were Western intellectuals consistently deluded about Communism from at least the 1920s to as late as the 1980s, if they are as trustworthy as they claim?
Psychological differences exist between ethnic groups and between the sexes, and these should have implications for issues like women in combat, affirmative action, and immigration.
The horror show of the aftermath of decolonization in some Third World countries was a preventable disaster on the scale of Communist atrocities.
The first three are meta-level arguments that contribute to the last four, which are object-level assessments you can make without resorting to the meta arguments.
Do you think the decline of lynching is mere change rather than progress?
The claim that the morality of a society doesn’t steadily, generally, and inexorably increase over time is not the same as the claim that there will be no examples of things that can be reasonably explained as increases in societal morality. If morality is an aggregate of bounded random walks, you’d still expect some of those walks to go up.
To return to the case at hand: the decline of lynching may be an improvement in one area, but you have to weigh it against the explosions in the imprisonment and illegitimacy rates, the total societal collapse of a demographic that makes up over a tenth of the population, drug abuse, knockout games, and so on.
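To make the bounded-random-walk picture concrete, here is a minimal simulation sketch in Python (my own illustration of the model, not either commenter’s): even with zero drift, a large fraction of such walks end above where they started, so a single improving indicator is weak evidence of a general upward trend.

    # Toy model: morality as an aggregate of independent, driftless,
    # bounded random walks. Even with no trend, many components end up higher.
    import random

    def walk(steps=200, bound=10):
        x = 0
        for _ in range(steps):
            x = max(-bound, min(bound, x + random.choice((-1, 1))))
        return x

    walks = [walk() for _ in range(1000)]
    print(sum(w > 0 for w in walks) / len(walks))  # roughly half, give or take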
Do you think there’s a causal connection between the decline of lynching and the various ills you’ve listed?
How is causality relevant? The absence of continuous general increase is enough to falsify the Whig-history hypothesis, given that the Whig-history hypothesis is nothing more than the hypothesis of continuous general increase—unless we add to the hypothesis the possibility of ‘counterrevolutionary’ periods where immoral, anti-Whig groups take power and immorality increases, but expressing concern over things like illegitimacy rates, knockout games, and inner-city dysfunction is an outgroup marker for Whigs.
You need evidence of actual decline to justify reaction. Otherwise, why reverse random drift?
This is always a bad sign in an argument. If causality doesn’t matter, what does?
Demonstrating causality would be doing more work than is necessary. To argue against the hypothesis that the values of A, B, C, … are all increasing, you don’t need to show that an increase in the value of A leads to decreases in any of B, C, …; you just need to demonstrate that the value of at least one of A, B, C, … is not increasing.
(To avert the negative connotations the above paragraph would likely otherwise have: no, I don’t think the decline of lynching caused those various ills.)
(parentheticals added).
You were originally arguing that some weighted sum of A, B, C… was increasing. NancyLebovitz was pointing out that A has clearly decreased, and so for the sum to increase on average, there has to be a correlation between A decreasing and B, C, … increasing. Then she asked if you thought this correlation was causal.
In response, you punted and changed the argument to the claim quoted above (that the absence of continuous general increase falsifies the hypothesis of continuous general increase), which was a really nice tautological argument.
So while showing causality is “more work than is necessary” for disproving the straw-Whiggery of your previous comment, it doesn’t mean anything for the point NancyLebovitz was raising.
I think people not being assaulted and killed by an angry crowd is good. Vigilantism is a sign of a deficient justice system and insufficient pacification of the population, thus poor governance. I’m happy at the reduction of lynching, but I’m unhappy at the increase of other indicators of depacification and deficient justice systems that seem to have grown worse in Western society.
As a side note, mob violence is still a disturbingly common phenomenon from Nigeria to Madagascar, not to mention Southern Asia and some Latin American countries. I'm also, sadly, quite unconvinced that no lynchings occur in Western states, for that matter.
Here is a recent example.
That isn't an argument amounting to "right is right", since the left has its own version... see Chomsky's Manufacturing Consent.
What's more, manufactured consent existed in societies that didn't run on consent, in the form of actual sermons preached in actual churches and actual cathedrals.
My own attempt at a limited view of moral progress has the following features:
Economic growth, largely driven by secular trends in technology, has resulted in greater surpluses that may be directed towards non-survival goals (cf. Yvain's "Strive/survive" theorising), some of which form the prerequisites of higher forms of civilisation, and some of which are effectively moral window-dressing.
As per the Cathedral hypothesis, with officially sanctioned knowledge only being related to reality through the likely perverse incentives of the consent factory, this surplus has also been directed towards orthogonal or outright maladaptive goals (in cyclical views of history, Decadence itself).
We no longer have to rationalise the privations of older, poorer societies. This is the sense in which linear moral progress is the most genuine (cf. CEV).
The interaction between the dynamics of holier-than-thou moralising and the anticipatory experience of no longer having to rationalise poverty is complicated. Examination of history reveals the drive for levelling and equalisation to be omnipresent, if not consistently exploitable.
Word counts: Yvain 16k; Land 28k; Moldbug 120k.
I counted Moldbug from this complete copy.
Useful info, thank you! This reinforces my primary recommendation of Land.
No. Well, maybe the third paragraph, except that it’s part of history now and for that reason should be left alone. But otherwise, both your distancing of MoreRight from LessWrong and Eliezer’s distancing of LessWrong from the reactosphere are appropriate and relevant statements of true things.
Yes, you should delete it. Eliezer shouldn’t have written his comment, either.
Could you explain why (for both comments)?
Maybe I can. It seems Eliezer was hurriedly trying to make the point that he's not affiliated with neoreactionaries, out of fear of LessWrong's name being besmirched.
It's definitely true, I think, that Eliezer is not a neoreactionary and that LessWrong is not a neoreactionary place. Perhaps the source of confusion is that the discussions we have on this website are highly unusual compared to the internet at large and would be extremely unfamiliar and confusing to people with a more politically-oriented, mind-killed mindset.
For example, I could see how someone could read a comment like "What is the utility of killing ten sad people vs. one happy person" (that perhaps has a lot of upvotes), which is a perfectly valid and serious question when talking about FAI, and erroneously interpret that as this community supporting, say, eugenics. Even though we both know that the person who asked that question on this site probably didn't even have eugenics cross their mind.
(I’m just giving this as an example. You could also point to comments about democracy, intersexual relationships, human psychology, etc.)
The problem is that the inferential distance between these sorts of discussions and political discussions is just too large.
Instead of just being reactionary and saying "LessWrong doesn't support blabla", it would have been better if Eliezer had just recommended that the author of that post read the rationality materials on this site.
LessWrong is about the only public forum outside their own blog network that gives neoreaction any airtime at all. It’s certainly the only place I’ve tripped over them.
On the other hand, I at least found the conversation about neoreaction on LW to be vague and confusing and had basically no idea of what the movement was about until I read Yvain’s pieces.
What little I understood of it was having people on LW say how great Moldbug was and why I should read him.
I find it unlikely that the author would do that, or have the right mindset even if he did. So do you mean this would have been more optimal signaling somehow?
Perhaps signaling, and also to get people who are reading the article and comment section to read more about LessWrong instead of coming to possibly the wrong conclusion.
The best move for Eliezer to disassociate LessWrong from reactionaries would be to not mention them at all. Do you see anyone defending the honor of Hacker News in the comment section? Think about what your first instinct is when you hear someone from some organization you know nothing about explaining that they are not actually right-wing, or Communist, or, even better, racist.
I agree and that’s why I mentioned he should have just recommended reading the website.
Eliezer’s comment hurt my feelings and I’m not sure why it was really necessary. Responding to something just reinforces the original idea. If rationalists want to reject the Enlightenment, we should have every right to do so, without Eliezer proclaiming that it’s not canon for this community.
If I had still been working for MIRI now, would I be fired because of my political beliefs? That’s the question bothering me. Are brilliant mathematicians going to be excluded from MIRI for having reactionary views?
Part of the comment is basically like, “Scott Alexander good boy. We have paid him recently. Anissimov bad. Bad Anissimov no work for us no more.”
You claim a right not to have your feelings hurt that overrules Eliezer’s right to speak on the matter? That concept of offense-based rights and freedom to say only nice things is one that I am more used to seeing neoreactionaries find in their hated enemies, the progressives. Are you sure you know where you are actually standing?
Eliezer has made a true statement: that neoreaction is not canon for LessWrong or MIRI, in response to an article strongly suggesting the opposite.
Elsethread you write:
So Eliezer shouldn’t say anything, because:
He’s hurting your feelings.
He’s being hypersensitive.
Thank you for making this so clear.
Apparently the supposed Streisand effect applies to him responding to Klint but not to you responding to him. How does that one go?
“Responding to something just reinforces the original idea” touts timidity as a virtue—again, not a sentiment I would ever expect to see penned by any of the neoreactionaries I have read. These are the words of a sheep in wolf’s clothing.
And btw, it looks to me like Eliezer’s wasn’t an official-sounding disavowal, it was an official disavowal.
Your response to Eliezer, both here and in the other thread, comes across as a completely unjustified refusal to take his comment at face value: Eliezer explaining that he concluded your views were not worth spending time on for quite rational reasons, and saying so because he doesn't want people thinking he or the majority of the community he leads hold views which they don't in fact hold.
This seems to be part of a pattern with you: you refuse to accept that people (especially smart people) really disagree with you, and aren't just lying about their views for fear of reputational consequences. It's reminiscent of creationists who insist there's a big conspiracy among scientists to suppress their revolutionary ideas. And it contributes to me being glad that you are no longer working for MIRI, for much the same reasons that I am glad MIRI does not employ any outspoken creationists.
I find this comment a bit mean (and meaner than most of what I saw in this thread or the linked one, tho I haven’t read that one in much detail).
Maybe it’s because other people feel more strongly about this topic than I do; to me “democracy vs. monarchy” is both a confused and fuzzy question and an irrelevant one. Maybe with a lot of effort one can clarify the question and with even more effort, come up with an answer, but then it has no practical consequences.
Chris is obviously being mean-spirited here, and a direct response would only escalate, so I won’t make one.
Not mean-spirited. Just honest. If this were a private conversation, I’d keep my thoughts to myself and leave in search of more rational company, but when someone starts publicly saying things like...
“Eliezer [is] proclaiming that it’s not canon for this community.”
“The comment is basically like, ‘Scott Alexander good boy. We have paid him recently. Anissimov bad. Bad Anissimov no work for us no more.’”
Accusing Eliezer of dismissing an idea out of hand due to fear of public unpopularity.
(all of which are grossly unfair readings of Eliezer's comment)
...then I think some bluntness is called for.
Not that much more unfair than proclaiming something thoroughly refuted and uninteresting based on a single post rebutting the least interesting claims of only two authors, especially given that what appears to have gotten picked up as the central point of the post (NK/SK) is wrong on many different levels.
Hm, I didn’t feel that Eliezer was being particularly dismissive (and am somewhat surprised by the level of the reactions in this thread here). The original post sort-of insinuated that MIRI was linked to neoreaction, so Eliezer correctly pointed out that MIRI was even more closely linked to criticism of Neoreaction, which seems like what anybody would do if he found himself associated with an ideology he disagreed with—regardless of the public relations fallout of that ideology.
Reminder that the article just said neoreactionaries “crop up” at Less Wrong. Then the author referred to a “conspiracy,” which he admits is just a joke and explicitly says he doesn’t actually believe in it. The fact that Eliezer felt the need to respond explicitly to these two points with an official-sounding disavowal shows hypersensitivity, just like he displayed hypersensitivity in his tone when he reacted to the “Why is Moldbug so popular on Less Wrong?” thread. The tone is one of “Get it off me! Get it off me! Aiyeee!” If he actually wanted to achieve the “get it off me” goal, indifference would be a more effective response.
I routinely read “I was only joking” as “I meant every word but need plausible deniability.”
Silence is often consent & agreement.
Does no official response from Hacker News, which also received the damning accusation that neoreactionaries “crop up” there, imply consent and agreement from Y Combinator?
Given the things PG has said at times, I’m not sure that is a wrong interpretation of matters. Modus ponens, tollens...
There’s a difference between “neoreactionary” and “expresses skepticism against Progressive Orthodoxy”. Paul Graham might be guilty of the latter, but there’s certainly little evidence to judge him guilty of the former.
Are you and Konkvistador using the word with different meanings, the former narrower and the latter broader? or am I missing something? or...
I wasn’t aware we were a courtroom and we were holding our opinions to a level of ‘beyond a reasonable doubt’. I was pointing out that silence is often consent & agreement (which it certainly is), that PG has expressed quite a few opinions a neoreactionary might also hold (consistent with holding neoreactionary views, albeit weak evidence), and he has been silent on the article (weak evidence, to be sure, but again, consistent).
Paul Graham is also a cultural liberal and has the resulting biases. Look at the last section of this essay for a dramatic example.
You should know perfectly well that as long as MIRI needs to coexist and cooperate with the Cathedral (as colleges are the main source of mathematicians) they can’t afford to be thought of as right wing. Take comfort at least in knowing that whatever Eliezer says publicly is not very strong evidence of any actual feelings he may or may not have about you.
I can’t figure out whether the critics believe the Cathedral is right-wing paranoia or a real thing.
MIRI is seen as apolitical. I doubt an offhand mention in a TechCrunch hatchet job is going to change that, but a firm public disavowal might, per the Streisand effect.
From reading HPMOR and some of the sequences (I’m very slowly working my way through them) I get the impression that Eliezer is very pro-enlightenment. I can’t imagine that he’d often explicitly claim to be pro-enlightenment if he weren’t, rather than simply avoiding the whole issue.
The Enlightenment predates democratic orthodoxy. Monarchs like Louis XVI, Catherine II, and Frederick the Great were explicitly pro-Enlightenment.
I had thought that reactionaries were anti-enlightenment though?
It’s complicated. We reject some parts of the Enlightenment but not all. Jayson just listed three of my favorite monarchs, actually.
Being pro-enlightenment from the perspective of a science fanboy and polyamorous atheist is different from being pro-enlightenment as a direct counterargument to reactionary thought. Certainly before I read NR stuff I never thought a reasonable person could claim the Enlightenment was a bad thing.
That’s a very interesting phrase.
It may well be true in which case it reflects a very interesting feature of the territory.
Absolutely true.
Eh? Is that because of a more general principle that Eliezer ought not make statements about what is and isn’t LW canon, or is it a special case?
Special case. This site is based around his work, so he has every right to decide what it is officially linked to, but the tone of his remarks seemed to go much further than merely disavowing an official connection. Eliezer also states that "'More Right' is not any kind of acknowledged offspring of Less Wrong nor is it so much as linked to by the Less Wrong site," but More Right is indeed linked to in the blogs section of the Wiki, last time I checked. Also, More Right was founded by LessWrong rationalists applying rationality to reactionary ideas. More Right is indeed an indirect offspring of the LessWrong community, whether community leaders like it or not.
But you're not a brilliant mathematician – you shouldn't (even rhetorically) evaluate the consequences of your political actions as they would relate to a hypothetical highly-atypical person. Of course, a genius (being of immense value) has lots of wiggle room. But you're not one.
If you still worked at MIRI, you would have negative value. That is, the risk of someone using your writings to tar and feather MIRI would be higher than the expected value of employing you. It’s likely you would be fired, as it would be a rational move. I have no idea how good you were at whatever it was you did for MIRI, but it’s likely there are plenty of candidates of equal abilities who are not publishing blogs that pattern-match with fascist literature.
As being thought of in a political light (especially a political light that the vast majority of prospective contributors and donors find distasteful) would certainly harm MIRI, how could you possibly be offended by something so predictable?
Marcus Hutter just had a little article about AIXI published on the Australian website “The Conversation”. Not much there that will be new to LW readers, though. Includes a link to his presentation at the 2012 Singularity Summit.
Discussion on Hacker News. (As happens far too often on HN, the link there is to the blogspam site phys.org rather than to the original source.)
This is not at all a problem unique to HN.
It's actually somewhat less of a problem on Hacker News than on many sites, because the editors there will proactively change the source to the more canonical one. For example, this particular post now points to the original article.
No indeed. But it’s a problem that HN has, which is why I remarked on it. (I don’t see it, for instance, here at Less Wrong.)
I do sometimes get mildly annoyed at people linking to a generic news site about some research (nytimes, etc.) instead of digging up a source as close to the original as possible.
It may not be “blogspam” proper, but it’s research or tech being summarized by journalists, which tends to be less accurate than the original source.
I just realized it's possible to explain people picking dust specks in the torture vs. dust specks question using only scope insensitivity and no other mistakes. I'm sure that's not original, but I bet this is what's going on in the head of a normal person when they pick the specks.
The dust speck "dilemma", like a lot of the other exercises that get the mathematically wrong answer from most people, triggers a very valuable heuristic: the "you are trying to con me into doing evil, so fuck off" heuristic.
Consider the problem as you would if it were a problem you were presented with in real life.
The negative utility of the “Torture” choice is nigh-100% certain. It is in your physical presence, you can verify it, and “one person gets tortured” is the kind of event that happens in real life with depressing frequency. The “Billions of people get exposed to very minor annoyance” choice? How is that causal chain supposed to work, anyway? So that choice gets assigned a very high probability of being a lie.
And it is the kind of lie people encounter very frequently. False hypotheticals in which large numbers of people suffer if you do not take a certain action are a common lever for cons. From a certain perspective, this is what religion is: attempts to hack people's utility functions by inserting such absurdly large numbers into the equations that if you assign any probability at all to their being true, they become dominant.
So claims that look like this class of attack routinely get assigned a probability of zero unless they have very strong evidence backing them up because that is the only way to defend against this kind of mental malware.
This is essentially an argument from outside the argument. If we really had a choice between 50 years of torture and 3^^^3 dust specks, the rational choice would be the 50 years of torture. But the probability of this description of the situation being true is extremely low.
If you, as a human, in a real-life situation believe that you are choosing between 50 years of torture and 3^^^3 dust specks, almost certainly you are confused or insane. There will not be the 3^^^3 dust specks, regardless of whichever clever argument has convinced you so; you are choosing between an imaginary amount of dust specks and probably a real torture, in which case you should be against the torture.
The only situations where you can find this dilemma in real life are the "Pascal's mugging" scenarios. Imagine that you want to use glasses to protect your eyes, and your crazy neighbor tells you: "I read in my horoscope today that if you use those glasses, a devil will torture you for 50 years." You estimate the probability of this to be very low, so you use the glasses despite the warning. But as we know, the probability is never literally zero: you chose avoiding some dust specks in exchange for maybe a 1/3^^^3 chance of being tortured for 50 years. And this is a choice reasonable people make all the time.
Summary: In real life it is unlikely to encounter extremely large numbers, so we should be suspicious about them. But it is not unlikely to encounter extremely small probabilities. Mathematically, this is equivalent, but our intuitions say otherwise.
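For concreteness, the claimed equivalence is just dividing both sides by N (writing N for 3^^^3 and u for the two disutilities):

    N \cdot u_{\text{speck}} > u_{\text{torture}} \iff u_{\text{speck}} > u_{\text{torture}} / N

Many tiny harms weighed against one large harm is the same comparison as one tiny harm weighed against a 1/N chance of the large harm.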
If you were presented it in real life, there would be a lot fewer than 3^^^3 people at stake. You don’t need to consider certainty.
50 years of torture is also something you won’t encounter in real life, but you can get a lot closer.
It probably goes like this: “Well, 3^^^3 is a big number; something like 100. Would I torture a person to prevent 100 people having a dust speck in their eyes? How about 200 or 1000? No, this is obviously a madness.”
Yes. This. Whenever I talk with anyone about the Torture vs. Dust Specks problem, I constantly see them falling into this trap. See, for instance, this discussion post from a few months back, and my reply to it.
This happens again and again, and by this point I am pretty sure that the whole problem boils down to just this.
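For reference, 3^^^3 is nothing like 100. In Knuth's up-arrow notation, 3^^3 is already 3^(3^3) = 7,625,597,484,987, and 3^^^3 is a power tower of 3s that many levels high. A minimal sketch (the function name is mine):

    def up(a, n, k):
        # Knuth's up-arrow: up(a, n, 1) = a**n; each extra arrow iterates the previous level.
        if k == 1:
            return a ** n
        if n == 0:
            return 1
        return up(a, up(a, n - 1, k), k - 1)

    print(up(3, 3, 1))  # 3^3 = 27
    print(up(3, 3, 2))  # 3^^3 = 3**27 = 7625597484987
    # up(3, 3, 3) would be a tower of 3s about 7.6 trillion levels high; don't run it.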
I know that MIRI doesn't aim to actually build an FAI; they mainly try to provide material for those that will seriously try to do it in the future. At least this is my understanding; correct me if I'm wrong.
But has the work MIRI has done so far brought us closer to building a working FAI or any kind of AGI for that matter? Even a tiny bit?
They do intend on actually building an FAI, but that doesn’t preclude them from also trying to provide material for others who might do it.
From what I’ve seen in the last year, MIRI has sort-of backpedaled on the “actually building an AGI/FAI” goal, and pushed forward in their public declarations the “milder” goal of ensuring a positive impact of AGI when it finally gets created by someone.
This seems consistent with the first sentence in their mission statement:
Are you sure? The linked comment was posted in August of this year. I have seen them push the second goal more strongly recently, but I don’t think the first goal has been rescinded.
It’s more of an impression of mine than an actual statement of theirs.
Not sure where exactly to ask but here goes:
Sparked by the recent thread(s) on the Brain Preservation Foundation and by my Grandfather starting to undergo radiation and chemo for some form of cancer: while timing isn't critical yet, I'm tentatively trying to convince my Mother (who has an active hand in her Father's treatment) to consider preservation as an option.
What I’m looking for is financial and logistical information of how one goes about arranging this starting from a non-US country, so if anyone can point me at it I’d much appreciate it.
At the risk of giving Alcor free advertising, there’s information on their website: http://alcor.org/BecomeMember/
They can fly patients in from non-US countries, but they recommend that patients be in the USA if they feel they have a significant risk of death because it greatly decreases the risk of brain damage during transport.
Also, most Alcor members finance their preservation through life insurance. This might complicate things for your situation.
A very readable new paper on causality on Andrew Gelman’s blog: Forward causal inference and reverse causal questions. It doesn’t have any new results, but motivates asking “why” questions in addition to “what if” questions to facilitate model checking and hypothesis generation. Abstract:
See also this discussion post on LW.
I recently started using Habit RPG, which is a sort of gamified todolist where you get gold and XP for doing your tasks and not doing what you disapprove of.
Previously I had been mostly using Wunderlist (I also tried Remember The Milk, but found the features too limited), and so far Habit RPG looks better than Wunderlist on some aspects (more fine-grained control of the kind of tasks you put in it, regular vs. one-off vs. habits), and of course has an extra fun aspect.
Anybody else been trying it? (I saw it mentioned a few times on LW) Anybody else want to try?
I added so many dailies that I basically found the overhead of updating HRPG ate into the productive energy needed to complete the dailies, and ended up in a death cycle. I then decided to use it explicitly only for tasks at work, and have been keeping up with it ever since, but I do miss the sweet spot of personal life productivity that I hit before that crash.
I use it. I find that it doesn’t motivate me too much to do things unless they’re dailies or I treat them as such; for example, I’ve had “compliment someone” as a habit for months, but I don’t think it’s actually made me compliment people more except maybe a small burst when I added it. But I find it more rewarding (not necessarily more effective) than beeminder for dailies; I don’t care about the yellow brick road, but I enjoy getting XP and gold and pets.
PM your ID to kremlin if you want to join our party then :)
I haven’t tried beeminder, I’m glad to see some anecdata that habitrpg is better.
I wouldn’t say either is better. HRPG doesn’t easily let you say “I want to do this X times a week” for values of X other than 7, unless you want to specify the days in advance. I also think (with not especially high confidence) that “I’m about to fail at beeminder” is more motivating than “I’m about to lose health in HRPG” even if I don’t have a pledge; but if I’m ahead, HRPG is better at motivating me to keep going. My HRPG has only occasionally got to the point where I was worried about dying, and then fear of failure was more motivating than normal, but usually I can coast.
I haven't tried it, but I wrote my own gamification system. It is a simple time-tracking system with a history viewer, in which I clock my activity for the day. I have my own goals that I try to achieve every day. It has been very effective in keeping me working 30 hours each week for 7 weeks so far.
Have they fixed their memory leak problem? I stopped using it after my frustration exceeded a pretty reasonable threshold.
I mostly use the web-based version, so memory leaks aren’t really an issue. On what platform did you have problems? (I also have the android app, but don’t use it much)
See here and here. I used the web version, which was extremely buggy. From your comments I assume it might not be that terrible anymore, so I’ll look at using it again.
Yes, it’s improved considerably.
Ah okay, I don’t know then; that was a few months ago and it would take some digging to figure out whether they did that server upgrade.
All I can say is that I haven’t run into any major technical issues yet.
I use it extensively and it's been by far the most successful of all the productivity systems I've tried. For one, I've stuck with it since February of this year, whereas most of my past attempts have lasted around a month. For another, there are enough community aspects to it that a couple of times when my productivity has been low, I've gotten a lot of encouragement from the community to get started again.
Cool, will you join our party (PM kremlin) or are you already in one?
Already in one, sorry :) I suspect that one of the other guys in my party is also a stealth LWer, at least he seems to carry a lot of the local memes.
You and I were talking about this in IRC. I remember expressing a concern about HabitRPG that, while it does genuinely motivate me at the moment, I’m not sure what’s going to happen when it ends: when I’ve upgraded all my items, when I’ve collected all the pets, etc etc. If I just start over, the new game will likely motivate me significantly less than the first time around. And more than likely I just plain won’t want to start over.
I’ve been trying to think of ways around this gamification problem, because it plays a part in nearly every attempt at gamification I’ve seen. I think that, for one aspect of gamification—motivating yourself to learn new things—there is a way that at least sort of overcomes the ‘what happens when it ends?’ problem:
Skill trees. Like this. Maybe a website, or application, that starts with just the bare-bones code for creating skill trees, where you can create an account and add a skill tree to your account from a list of existing searchable skill trees, or create your own skill tree if you can't find one that's appropriate for you, which will then allow other people with similar goals to add your skill tree to their account, etc.
BTW I’ve started a LessWrong Party on HabitRPG for when they start implementing new mechanics that will take advantage of parties. If anybody wants to join the party send me your User ID, which you can find in Settings > API
Not sure about how the system works, just started using it yesterday, so I hope I will not bring doom to the whole party.
I’ve invited you.
I’m sure you’ll be fine. It’s not until they start adding the new boss/quest mechanics that it will be possible for anyone to bring doom to a party.
invite sent
… and I joined it, and encourage others to join too! :)
That definitely looks interesting, and I’ve been thinking about pretty similar things. I hadn’t found Dungeons and Developers, and it is pretty neat (though it’s just a proof-of-concept / a fancy way of displaying a list of web development skills).
Looks kinda interesting. When I was 18 or so, I tried setting up a similar system, but it failed for a couple of reasons. I’ll give it a try. Since I’ve noticed that small (online) notifications (such as upvotes or reblogs) make me feel good, this to-do list might trigger a similar response.
… and I see you’ve joined our party, yay!
LW meta: I have received a message from “admin”:
I have seen, indeed, options to create a wiki account. But I already have one; how do I properly associate the existing accounts?
I received three or four messages like this.
It seems the mythical creature known as “admin” is buggy :-D
Experience that I had recently that I found interesting:
So, you may have noticed that I’m interested in causality. Part of my upcoming research is using pcalg (which you may have heard of) to identify the relationships between sensors on semiconductor manufacturing equipment, so that we can apply work done earlier in my lab where we identify which subsystem of a complex dynamic system is the root cause of an error. It’s previously been applied in automotive engineering, where we have strong first principles models of how the systems interact, but now we want to do it in semiconductor, where we don’t have first principles models of how the systems interact, and need to learn those models from data.
Time to get R installed and pcalg downloaded correctly on Ubuntu: ~2 hours. (One of the packages that pcalg requires requires R 3.0, which you need to modify the Ubuntu update files to get instead of R 2.14, and a handful of other things went wrong.)
Time to figure out how to get my data into R with labels: ~2 minutes.
Time to run the algorithms to discover the causal network for the subsystem I have data for now: ~2 seconds.
I’m not sure I should also count the time spent learning about causality in the first place (which I would probably estimate at ~2 weeks), but it’s striking how much of the investment in generating the results is capital, and how little of it is labor. That is, now that I have the package downloaded, I can do this easily for other datasets. Time to start picking some low-hanging fruit.
(Living in the future is awesome: as much as I complained about all the various rabbit holes I had to go down while installing pcalg, it took way less time than it would have taken me to code the algorithms myself, and I doubt I would have done anywhere near as good a job at it.)
Absolutely. When I look at my own projects, they go like ‘gathering and cleaning data: 2 months. Figuring out the right analysis the first time: 2 days. Runtime of analysis: 2 hours.’
The first time this happened to me, it drove me nuts. It reminded me of writing my first program, where it took maybe 20 minutes to write it even looking everything up, and then 2 hours to debug. That was when the true horror of programming struck me. Years later, when I came across the famous quote by Wilkes, "I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs," I instantly knew it for truth.
(Same for http://lesswrong.com/lw/j8f/anonymous_feedback_forms_revisited/ - downloading the data and figuring out how to join up the two CSVs in just the right way, an irritating hour. The logistic regression? Maybe 3 minutes playing around with different predictor variables.)
This realization caused one of my big life mistakes, I think. It struck me in high school, and so I foolishly switched my focus from computer science to physics (I think, there might have been a subject or two in between) because I disliked debugging. Later, I realized that programming was both powerful and inescapable, and so I’d have to get over how much debugging sucked, which by then I had the emotional maturity to do (and I suspect I could have done back then, if I had also had the realization that much of my intellectual output would be programming in basically any field).
I think the whole experience is also interesting on a meta-level. Since programming is essentially the same as logical reasoning, it goes to show that humans are very nearly incapable of creating long chains of reasoning without making mistakes, often extremely subtle ones. Sometimes finding them provides insight (especially in multi-threaded code or with memory manipulation), although most often it’s just you failing to pay attention.
Threading is not normally part of logical reasoning. Compare with mathematics, where even flawed proofs are usually (though not always) of correct results. I think a large part of the difficulty of correct programming is the immaturity of our tools.
Statins by numbers:
Good Judgment Project is temporarily signing up more participants: http://www.goodjudgmentproject.com/
(Fairly easy way to make $100 a year or so, if that’s a motivating factor.)
Ron Arkin, author of Governing Lethal Behavior in Autonomous Robots, on autonomous lethal robots:
I often find that postings with few votes nonetheless come with a lively and up-voted discussion.
The listing of postings shows the karma of the posting but gives no indication of the volume and quality of the discussion.
At least for the display of a posting, it should be easy to display an additional karma score indicating the sum total of the comments. That'd give an indication of the aggregate.
This would improve awareness of posting discussions. On the other hand, such a scoring might further drain away participation from topics which fail to attract discussion.
I would take it farther—I’d like to see people get a small percentage of the karma from the discussions their posts and comments generate.
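A minimal sketch of both ideas together (Python; the 10% share is an arbitrary illustration, not a proposal):

    comment_karma = [5, -2, 12, 3]               # hypothetical comment scores under a post
    aggregate = sum(comment_karma)               # shown beside the post's own karma
    poster_share = 0.10 * max(aggregate, 0)      # "small percentage" credited to the poster
    print(aggregate, poster_share)               # 18 1.8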
I would worry that this would incentivize controversial discussions.
As long as the controversial discussions are up-voted that shouldn’t be a problem. Except if you disagree with the classical system of thesis, antithesis, synthesis.
Votes in controversial discussions are usually more about signaling than anything else.
Indeed. But that doesn’t mean that we cannot infer signal from that.
Human emotions are also primary signals. And nonetheless you can e.g. use perception of shouting (accompanying anger) to locate conflict areas in a social group. In a way, karma expenditure is such shouting and draws attention.
Part of the problem is that karma is one-dimensional. Each emotion-pair is a dimension, and we have no way to signal e.g. happiness, fear, awe, ... Slashdot, for example, has the "funny" tag. That could be used.
An entirely different approach would be to vote on the votes. But for that the votes would need to be visible. And voting would have to have an associated cost.
One obvious problem with that system: what happens with habitually bad posters?
Let’s say I write something so insipid and worthless that it’s worth every downvote on the site… and then a better-quality poster writes an excellent point-by-point take-down of it and gets tons of upvotes for it. Should I then benefit from “generating” such a high quality rebuttal, or is that just going to weaken the already weak incentive structure the karma system is supposed to be creating?
I can think of a good case just in the last few days of a poor-quality poster who would seriously benefit from this system, and as a long time poster here you can probably think of more.
I think that an excellent point-by-point take-down in a comment is not a good idea, because:
a) It is not very visible; if it is excellent and makes a point, it should be done as an independent posting.
b) Writing a point-by-point take-down may be overkill and alienate the initial poster. Compensating him with karma may make up for this (and motivate the commenter to post separately).
c) Individual counters should be addressed by individual comments, to allow them to be voted and commented on individually.
In the remaining cases, and if the point-by-point reply is well-meaning and clarifies matters that were unclear to the initial poster: why shouldn't he get some credit for honestly (possibly mustering some courage) bringing up a question?
It is of course important to choose a suitable fraction of the comment karma.
You have motivated the better poster to write an excellent post.
If your original post was really insipid and useless it would just be ignored. Capable people rarely waste effort on refuting truly worthless stuff.
How confident of that are you?
Not very. Note my hedging in mentioning “capable” people :-)
I think that in the short term there is the incentive to pile onto the stupid post and shred it to bits. But the bloom on this flower fades very rapidly. Smart people tend to realize that it’s not a good use of their time.
Contrast this to a nonstupid but controversial position which motivates someone to write an excellent piece—for an example consider Yvain’s anti-neoreactionary FAQ.
Thinking a bit about it, I don't think it would be a good idea to give the generated karma to the top poster, or even a fixed fraction of it. The fraction should decrease with distance from the root, because all intermediate commenters also deserve a share of the pot, and the pot shouldn't increase by itself. The function of distance should be between harmonic and exponential, I think. Or it could be tuned by sampling actual comment trees (except that your whole idea is to influence the shape of the tree).
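A sketch of the decreasing-share idea (the two decay functions roughly bracket the "between harmonic and exponential" suggestion; the numbers are made up):

    # Share of a comment's karma credited to each ancestor, nearest first.
    harmonic = lambda d: 1.0 / (d + 1)      # decays slowly with depth d
    exponential = lambda d: 0.5 ** d        # decays fast

    def ancestor_shares(karma, depth, decay):
        return [karma * decay(d) for d in range(1, depth + 1)]

    print(ancestor_shares(12, 3, harmonic))     # [6.0, 4.0, 3.0]
    print(ancestor_shares(12, 3, exponential))  # [6.0, 3.0, 1.5]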
Do you mean that as an orthogonal direction, thus not showing in the karma associated with the post but only on their 'account'?
Or do you mean that the aggregated karma score I proposed is actual karma the poster gets?
I was thinking it would go into their single listing of karma, but it might be better as a separate number.
I am interested in getting better at negotiation. I have read lots on the subject, but I have realised that I was not very careful about what I read, and what evidence the authors had for their advice. I have stumbled on a useful heuristic to find better writing.
The conventional wisdom in negotiating says you should try very hard to not be the first person to mention a price.
I can see that, if you’re the seller, it may make sense to try to convince the buyer that they want the product before you start the price negotiation.
When it comes to the price negotiation though, I am not confident about the conventional wisdom.
Given what we know about anchoring, it seems more likely that the first person to say a figure will tend to anchor the negotiation to that figure, and this will influence the negotiation towards that figure.
I hadn't read anything discussing this until I thought of it myself. But searching for 'anchoring' as well as 'negotiation' gives plenty of links to negotiation advice that looks much more evidence-based than what I'd read before: example. Checking for 'anchoring' in the index would probably work as a filtering technique for books on the subject, but I have not tested this.
You could anchor without giving your price first.
"Different cars have different prices. I have seen ones worth $100,000, but I guess this one really isn't one of them. How much would you give for this car?"
I've used anchoring successfully when negotiating a TV price at a large electronics retailer. I used the price of a cheaper TV with fewer features as an anchor. YMMV.
A while back, I tried to learn more about the efficacy of various blindness organizations. Recently, I realized that (1) I really need some sort of training, and (2) my research method was awful, and I should have just looked up notable blind people and looked for patterns in their training.
I avoided doing the “look at blind people” research out of it sounding boring, until not even an hour ago, when I just opened all the pages on blind Americans on Wikipedia and skimmed them all for the information in question.
Either most notable blind Americans don’t get any noteworthy training, or this was left out of a great many wiki pages. Only four persons had specific training mentioned, all from different, mostly local schools; one other had training implied (a guidedog was mentioned), while most of the others could conceivably ignore anything more than a braille/cane instructor (and several could have gone without those, based on their articles).
So, apparently, no organization has a monopoly on success, however loudly some proclaim otherwise. This doesn’t help me shop for a training center/method, but it’s data.
Just finished MaddAddam, the conclusion of Margaret Atwood's Oryx and Crake dystopian trilogy. It is very well written and rather believable. Her dry sardonic humor is without peer. There are many LW-relevant ideas and quotes in it, but here is one that is most un-LW, about a cryopreservation company, CryoJeenyus:
In these situations in real life I often find it's simply because people aren't aware of the science behind cryonics.
Would anyone be interested in going to a LW meetup in the North-East of England? I’m thinking Newcastle or Durham.
If you’re local, I suggest announcing a meetup and seeing who shows up.
What the hell is going on with all the ads here? I’ve got keywords highlighted in green that pop up ads when you mouse over them, stuff in the top and sidebars of the screen, popups when loading new pages… all of this since yesterday.
Normally I would think this sort of thing meant I had a virus (and I am scanning for one with everything I have) but other people have been complaining about stuff like this as well over the last few days.
I would be glad to donate if the site needs more money to stay up, but this is absolutely unacceptable.
[Edit: Never mind, it really was a virus.]
Post a source dump of your LW page to pastebin...?
I have nothing like that and my early-warning systems show nothing on LW pages except for viglink and the usual Google Analytics.
Actually, it turned out the problem was on my end. Sorry for the fuss.
This is an extraordinary claim by Eliezer Yudkowsky that progress is a ratchet that moves in only one direction. I wonder what, say, the native Americans circa 1850 thought about Western notions of progress? If you equate “power” with “progress” this claim is somewhat believable, but if you’re also trying to morally characterize the arc of history then it sounds like you’ve descended into progressive cultism and fanaticism.
So, now replying knowing your context, this actually came up in discussion with Eliezer at the dinner after his talk at MIT. The most agreed upon counterexample was more restrictive drug laws. But if one interprets Eliezer’s statement as being slightly more poetic and allowing that occasional slips do occur but that the general trend is uni-directional, that looks much more plausible. And the opinion of the general American population in 1850 in many ways doesn’t enter into that: most of that population took for granted factually incorrect statements about the universe that we can confidently say are wrong (e.g. not just religious belief but belief in a literal global flood and many other aspects of the Abrahamic religions which are demonstrably false).
What is the example?
That restrictive laws go against the Enlightenment? Or that Prohibition was reversed and people are expecting other drug laws to be reversed?
The idea was that in general, more restrictive drugs laws, which became more common in the 20th century, were a step backwards.
A step backwards by what metric?
Without a metric, how is this any different from saying that the fall of monarchies in the 20th century is a step backwards?
You were responding to DI, who asked about the Native Americans of 1850. Like those of today, they would probably applaud restrictions on alcohol and condemn restrictions on their own intoxicants. A very simple first-approximation theory of prohibition is that the West conquered the world and restricted intoxicants to its favorites. It was too late to ban coffee and cigarettes (and maybe stimulants are held to different standards), but it banned other drugs as soon as it noticed them.
Sure, people in 1850 had lots of false beliefs. Which ones are relevant to drugs?
That wasn’t the point of the drug example. The point of the drug example was that Eliezer agrees that morality hasn’t gone in an absolute one way direction.
Fine, but by making “less factually incorrect statements about the universe” your measure of the good, you’ve essentially assumed what you’re trying to show—the superiority of Enlightenment-based notions of progress.
Not really. Someone can have a detailed and correct understanding of the universe and not have that impact their morals. What's relevant here is that some of those aspects directly inform morals. We now know that an Abrahamic deity is extremely unlikely, as are most other classical notions of deity. Thus, morals, values, or general deontological rules based on divine revelation are not by themselves worth looking at. Similarly, at a meta-level, we know to pay less attention to arguments based off of religious texts when people discuss issues where moral views disagree.
Did you mean to make this as a reply to another comment or was “This” meant to link somewhere?
Apologies, my reply didn’t work correctly. I was referring to his comment at this thread: http://techcrunch.com/2013/11/22/geeks-for-monarchy/
“The ratchet of progress turns unpredictably, but it doesn’t turn backward.”
If you speak Slovak or Czech and you are a LW fan, join us in a new subreddit, "rozum"!
You could set up a comment voting system based on the theory of forum quality that only high-value comments in response to other high-value comments indicate a healthy forum. Make any upvote or downvote on a child comment also upvote or downvote the parent comment it is replying to. So you'd get a bonus of upvotes if someone replied to your upvoted comment with another upvoted comment, but people would be hesitant to reply to downvoted comments, since their replies probably wouldn't be upvoted, so as to keep the parent comment from gaining upvotes.
This might be a thing that needs to be plugged in a forum from the start and let the local culture form around it instead of dropping it in an existing forum. Also there might be the problem that votes tend to degenerate to track consensus agreement instead of comment quality, and this system might exacerbate some groupthink failure modes.
We could test this theory by using it on the existing data and selecting the best comments under this theory. I would be interested in reading the "top 20 (or 50) LW comments ever" found by this algorithm, posted as a separate article. It could give us an approximate idea of what exactly the new system would incentivize.
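A minimal sketch of the propagation rule itself, assuming each comment knows its parent (the ids and votes here are made up):

    from collections import defaultdict

    parent = {"c2": "c1", "c3": "c2"}   # hypothetical child -> parent comment ids
    score = defaultdict(int)

    def vote(comment_id, delta):
        score[comment_id] += delta
        p = parent.get(comment_id)
        if p is not None:
            score[p] += delta           # the proposed pass-through to the parent

    vote("c3", +1)                      # upvoting c3 also upvotes its parent c2
    print(dict(score))                  # {'c3': 1, 'c2': 1}

Running this over an existing comment dump and re-ranking would give exactly the "top 20 comments under the new system" list proposed above.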
Is there a good canonical source for all LW comments ever? I’m interested in importing the data into Python and playing around with ranking algorithms. (I’m not sure what disclaimer to use to keep others from not doing the same just because I publicly said that I’m interested in it, but yeah, feel free to duplicate work and come up with other interesting analyses)
You could ask matt to send you the necessary parts of the database.
There’s this and this. Maybe they allow you to go all the way back to the beginning.
It’s probably doable to use those to scrape comments and put them into some kind of list or database, but spending time looting LW comments that way seems like wasted effort compared to getting a full dump from an official source.
Ask a lot of good questions so that other people do the real work and say lots of stupid shit to people you don’t like. Would this be sufficient to game this system? :)
Incentivizing asking a lot of good questions but not so many that people who might answer such questions get overloaded might be a good thing.
Someone should make a top level post introducing a contagious meme for upvoting good questions. Good questions are upvoted too rarely, but I don’t think we need software to fix it.
Absolutely. For my own part, I find that getting my good questions answered provides sufficient incentive, but I'm willing to believe that I'm atypical in that respect and that further incentive, up to a threshold, would be beneficial, and I have no idea where we are relative to that threshold.
I don’t really think this is relevant to LessWrong per se, but I’m wondering if any smart folks here have attempted to solve this “internet mystery”:
Pattern-matches to marketing campaign. Though an unusually long-lived one if so.
Is there a way to disable editing on a single comment of mine?
Why would you want to do that? If you want to make a guarantee that the comment isn’t edited later, that happens automatically: comments get a little asterisk by the date if they’ve been edited. Or you can ask people to reply to the comment quoting it. If multiple people do so, that’s a good guarantee it hasn’t been edited later.
Ah, didn’t know that about the asterisk. Thanks.
Even harder than recognizing proto-sciences—how could you recognize a basis for a system which hasn’t yet been invented?
The real world example is the collection of astronomical data long before there was any astronomy.
Making up new systems isn’t that hard. The difficult thing is to make up systems that are useful.
Apologies, I was referring to his comment at this thread: http://techcrunch.com/2013/11/22/geeks-for-monarchy/
“The ratchet of progress turns unpredictably, but it doesn’t turn backward.”
Um, you know you can reply to comments directly here? In the lower right hand of a comment there's a bunch of buttons, the leftmost of which is the reply button.