Epistle to the New York Less Wrongians
(At the suggestion and request of Tom McCabe, I’m posting the essay that I sent to the New York LW group after my first visit there, and before the second visit:)
Having some kind of global rationalist community come into existence seems like a quite extremely good idea. The NYLW group is the forerunner of that, the first group of LW-style rationalists to form a real community, and to confront the challenges involved in staying on track while growing as a community.
“Stay on track toward what?” you ask, and my best shot at describing the vision is as follows:
“Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us.”
(That last part is something I only realized was Really Important after visiting New York.)
Michael Vassar says he’s worried that you might be losing track of the “rationality” and “world optimization” parts of this—that people might be wondering what sort of benefit “rationality” delivers as opposed to, say, paleo dieting. (Note—less worried about this now that I’ve met the group in person. -EY.)
I admit that the original Less Wrong sequences did not heavily emphasize the benefits for everyday life (as opposed to solving ridiculously hard scientific problems). This is something I plan to fix with my forthcoming book—along with the problem where the key info is scattered over six hundred blog posts that only truly dedicated people and/or serious procrastinators can find the time to read.
But I really don’t think the whole rationality/fun association you’ve got going—my congratulations on pulling that off, by the way, it’s damned impressive—is something that can (let alone should) be untangled. Most groups of people capable of becoming enthusiastic about strange new nonconformist ways of living their lives would have started trying to read each other’s auras by now. Rationality is the master lifehack which distinguishes which other lifehacks to use.
The way an LW-rationality meetup usually gets started is that there is a joy of being around reasonable people—a joy that comes, in a very direct way, from those people caring about what’s true and what’s effective, and being able to reflect on more than their first impulse to see whether it makes sense. You wouldn’t want to lose that either.
But the thing about effective rationality is that you can also use it to distinguish truth from falsehood, and realize that the best methods aren’t always the ones everyone else is using; and you can start assembling a pool of lifehacks that doesn’t include homeopathy. You become stronger, and that makes you start thinking that you can also help other people become stronger. Through the systematic accumulation of good ideas and the rejection of bad ideas, you can get so awesome that even other people notice, and this means that you can start attracting a new sort of person, one who starts out wanting to become awesome instead of being attracted specifically to the rationality thing. This is fine in theory, since indeed the Art must have a purpose higher than itself or it collapses into infinite recursion. But some of these new recruits may be a bit skeptical, at first, that all this “rationality” stuff is really contributing all that much to the awesome.
Real life is not a morality tale, and I don’t know if I’d prophesy that the instant you get too much awesome and not enough rationality, the group will be punished for that sin by going off and trying to read auras. But I think I would prophesy that if you got too large and insufficiently reasonable, and if you lost sight of your higher purposes and your dreams of world optimization, the first major speedbump you hit would splinter the group. (There will be some speedbump, though I don’t know what it will be.)
Rationality isn’t just about knowing about things like Bayes’s Theorem. It’s also about:
Saying oops and changing your mind occasionally.
Knowing that clever arguing isn’t the same as looking for truth.
Actually paying attention to what succeeds and what fails, instead of just being driven by your internal theories.
Reserving your self-congratulations for the occasions when you actually change a policy or belief, because while not every change is an improvement, every improvement is a change.
Self-awareness—a core rational skill, but at the same time, a caterpillar that spent all day obsessing about being a caterpillar would never become a butterfly.
Having enough grasp of evolutionary psychology to realize that this is no longer an eighty-person hunter-gatherer band and that getting into huge shouting matches about Republicans versus Democrats does not actually change very much.
Asking whether the beliefs you most cherish shouting about actually control your anticipations, and whether they mean anything, never mind whether their predictions are actually correct.
Understanding that correspondence bias means that most of your enemies are not inherently evil mutants but rather people who live in a different perceived world than you do. (Albeit of course that some people are selfish bastards and a very few of them are psychopaths.)
Being able to accept and consider advice from other people who think you’re doing something stupid, without lashing out at them; and the more you show them this is true, and the more they can trust you not to be offended if they’re frank with you, the better the advice you can get. (Yes, this has a failure mode where insulting other people becomes a status display. But you can also have too much politeness, and it is a traditional strength of rationalists that they sometimes tell each other the truth. Now and then I’ve told college students that they are emitting terrible body odors, and the reply I usually get is that they had no idea and I am the first person ever to suggest this to them.)
Comprehending the nontechnical arguments for Aumann’s Agreement Theorem well enough to realize that when two people have common knowledge of a persistent disagreement, something is wrong somewhere—not that you can necessarily do better by automatically agreeing with everyone who persistently disagrees with you; but still, knowing that ideal rational agents wouldn’t just go around yelling at each other all the time.
Knowing about scope insensitivity and diminishing marginal returns doesn’t just mean that you donate charitable dollars to “existential risks that few other people are working on”, instead of “The Society For Curing Rare Diseases In Cute Puppies”. It means you know that eating half a chocolate brownie appears as essentially the same pleasurable memory in retrospect as eating a whole brownie, so long as the other half isn’t in front of you and you don’t have the unpleasant memory of exerting willpower not to eat it. (Seriously, I didn’t emphasize all the practical applications of every cognitive bias in the Less Wrong sequences but there are a lot of things like that.)
The ability to dissent from conformity; realizing the difficulty and importance of being the first to dissent.
Knowing that to avoid pluralistic ignorance everyone should write down their opinion on a sheet of paper before hearing what everyone else thinks.
But then one of the chief surprising lessons I learned, after writing the original Less Wrong sequences, was that if you succeed in teaching people a bunch of amazing stuff about epistemic rationality, this reveals...
(drum roll)
...that, having repaired some of people’s flaws, you can now see more clearly all the other qualities required to be awesome. The most important and notable of these other qualities, needless to say, is Getting Crap Done.
(Those of you reading Methods of Rationality will note that it emphasizes a lot of things that aren’t in the original Less Wrong, such as the virtues of hard work and practice. This is because I have Learned From Experience.)
Similarly, courage isn’t something I emphasized enough in the original Less Wrong (as opposed to MoR) but the thought has since occurred to me that most people can’t do things which require even small amounts of courage. (Leaving NYC, I had two Metrocards with small amounts of remaining value to give away. I felt reluctant to call out anything, or approach anyone and offer them a free Metrocard; and I thought to myself, well, of course I’m reluctant, this task requires a small amount of courage; and then I asked three times before I found someone who wanted them. Not, mind you, that this was an important task in the grand scheme of things—just a little bit of rejection therapy, a little bit of practice in doing things which require small amounts of courage.)
Or there’s Munchkinism, the quality that lets people try out lifehacks that sound a bit weird. A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells. Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else. Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death. Or figures out how to build the real-life version of the cycle of infinite wish spells. Magic the Gathering is a Munchkin game, and MoR is a Munchkin story.
It would be really awesome if the New York Less Wrong group figures out how to teach its members hard work and courage and Munchkinism and so on.
It would be even more awesome if you could muster up the energy to track the results in any sort of systematic way so that you can do small-N Science (based on Bayesian likelihoods, thank you, not the usual statistical-significance bullhockey) and find out how effective different teaching methods are, or track the effectiveness of other lifehacks as well—the Quantified Self road. This, of course, would require Getting Crap Done; but I do think that in the long run, whether we end up with really effective rationalists is going to depend a lot on whether we can come up with evidence-based metrics for how well a teaching method works, or whether we’re stuck in the failure mode of psychoanalysis, where we just go around trying things that sound like good ideas.
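For concreteness, here is a minimal sketch of what that kind of small-N Bayesian scorekeeping could look like, under a deliberately simple model (Beta(1,1) priors, made-up outcome counts; nothing here is an official method):

```python
# Minimal sketch (hypothetical data): comparing two teaching methods with a
# Bayes factor instead of a significance test. Each student either levels up
# (1) or doesn't (0); each method has an unknown success rate with a
# Beta(1, 1) prior.
from math import lgamma, exp

def log_marginal(successes: int, failures: int) -> float:
    """Log marginal likelihood of a Bernoulli sequence under a Beta(1,1) prior."""
    return lgamma(1 + successes) + lgamma(1 + failures) - lgamma(2 + successes + failures)

# Hypothetical results: method A, 6 of 8 students improved; method B, 2 of 7.
a_succ, a_fail = 6, 2
b_succ, b_fail = 2, 5

log_ml_differ = log_marginal(a_succ, a_fail) + log_marginal(b_succ, b_fail)
log_ml_same = log_marginal(a_succ + b_succ, a_fail + b_fail)

# Bayes factor > 1 favors "the methods have different success rates".
print(f"Bayes factor: {exp(log_ml_differ - log_ml_same):.2f}")  # ~2.4: weak evidence
```

Even with a toy model like this, the likelihood ratio says how strongly the data favor “these methods differ” over “they don’t”, and it degrades gracefully at small N instead of passing or failing an arbitrary threshold.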
And of course it would be really truly amazingly awesome if some of you became energetic gung-ho intelligent people who can see the world full of low-hanging fruit in front of them, who would go on to form multiple startups which would make millions and billions of dollars. That would also be cool.
But not everyone has to start a startup, not everyone has to be there to Get Stuff Done, it is okay to have Fun. The more of you there are, the more likely it is that any given five of you will want to form a new band, or like the same sort of dancing, or fall in love, or decide to try learning meditation and reporting back to the group on how it went. Growth in general is good. Every added person who’s above the absolute threshold of competence is one more person who can try out new lifehacks, recruit new people, or just be there putting the whole thing on a larger scale and making the group more Fun. On the other hand there is a world out there to optimize, and also the scaling of the group is limited by the number of people who can be organizers (more on this below). There’s a narrow path to walk between “recruit everyone above the absolute threshold who seems like fun” and “recruit people with visibly unusually high potential to do interesting things”. I would suggest making an extra effort to recruit people who seem to have high potential, but not making anything like a rule of it. But if someone not only seems to like explicit rationality and want to learn more, but also seems like a smart executive type who gets things done, perhaps their invitation to a meetup should be prioritized?
So that was the main thing I had to say, but now onward to some other points.
A sensitive issue is what happens when someone can’t reach the absolute threshold of competence. I think the main relevant Less Wrong post on this subject is “Well-Kept Gardens Die By Pacifism.” There are people who cannot be saved—or at least people who cannot be saved by any means currently known to you. And there is a whole world out there to be optimized; sometimes even if a person can be saved, it takes a ridiculous amount of effort that you could better use to save four other people instead. We’ve had similar problems on the West Coast—I would hear about someone who wasn’t Getting Stuff Done, but who seemed to be making amazing strides on self-improvement, and then a month later I would hear the same thing again, and isn’t it remarkable how we keep hearing about so much progress but never about amazing things the person gets done -
(I will parenthetically emphasize that every single useful mental technique I have ever developed over the course of my entire life has been developed in the course of trying to accomplish some particular real task and none of it is the result of me sitting around and thinking, “Hm, however shall I Improve Myself today?” I should advise a mindset in which making tremendous progress on fixing yourself doesn’t merit much congratulation and only particular deeds actually accomplished are praised; and also that you always have some thing you’re trying to do in the course of any particular project of self-improvement—a target real-world accomplishment to which your self-improvements are a means, not definable in terms of any personality quality unless it is weight loss or words output on a writing project or something else visible and measurable.)
- and the other thing is that trying to save people who cannot be saved can drag down a whole community, because it becomes less Fun, and that means new people don’t want to join.
I would suggest having a known and fixed period of time, like four months, that you are allowed to spend on trying to fix anyone who seems fixable, and if after that their outputs do not exceed their inputs and they are dragging down the Fun level relative to the average group member, fire them. You could maybe have a Special Committee with three people who would decide this—one of the things I pushed for on the West Coast was to have the Board deciding whether to retain people, with nobody else authorized to make promises. There should be no one person who can be appealed to, who can be moved by pity and impulsively say “Yes, you can stay.” Short of having Voldemort do it, the best you can do to reduce pity and mercy is to have the decision made by committee.
And if anyone is making the group less Fun or scaring off new members, and yes this includes being a creep who offends potential heroine recruits, give them an instant ultimatum or just fire them on the spot.
You have to be able to do this. This is not the ancestral environment where there’s only eighty people in your tribe and exiling any one of them is a huge decision that can never be undone. It’s a large world out there and there are literally hundreds of millions of people whom you do not want in your community, at least relative to your current ability to improve them. I’m sorry but it has to be done.
Finally, if you grow much further it may no longer be possible for everyone to meet all the time as a group. I’m not quite sure what to advise about this—splitting up into meetings on particular interests, maybe, but it seems more like the sort of thing where you ought to discuss the problem as thoroughly as possible before proposing any policy solutions. My main advice is that if there’s any separatish group that forms, I am skeptical about its ability to stay on track if there isn’t at least one high-level epistemic rationalist executive type to organize it, someone who not only knows Bayes’s Theorem but who can also Get Things Done. Retired successful startup entrepreneurs would be great for this if you could get them, but smart driven young people might be more mentally flexible and a lot more recruitable if far less experienced. In any case, I suspect that your ability to grow is going to be ultimately limited by the percentage of members who have the ability to be organizers, and the time to spend organizing, and who’ve also leveled up into good enough rationalists to keep things on track. Implication: make an extra effort to recruit people who can become organizers.
And whenever someone does start doing something interesting with their life, or successfully recruits someone who seems unusually promising, or spends time organizing things, don’t forget to give them a well-deserved cookie.
Finally, remember that the trouble with the exact phrasing of “become awesome”—though it does nicely for a gloss—is that Awesome isn’t a static quality of a person. Awesome is as awesome does.
At the LW meetups I’ve been to so far, I’ve seen what I would call ‘swarming’ around each female present. It doesn’t seem malicious, but they each end up being in the center of a group...
I guess this is something for other people to corroborate, I’m just a lonely data point waiting for my line.
Edit—please disregard this post
Hasn’t happened at the four meetups I’ve been to.
I don’t think we’ve seen this in London, but obviously our actual female participants would be better placed to comment.
I didn’t see it happening while I was there.
That’s good to hear—thanks! Very keen to hear feedback on this sort of thing.
I’ll confirm that this phenomenon exists; I routinely participate in such “swarms”. I do not know to what extent this is actually a problem, though.
I’ve been to 1 meetup with 5 participants, of which one was female (and married to another participant). So I don’t really have much relevant data yet. My guess is if this sort of thing shows up, it happens with larger meetup sizes.
I’m not sure it’s avoidable, though. I think the best improvement vector is to try to decrease the creepiness without trying to decrease the interest.
How many meetups have you been to and seen this? Do you think it produces negative effects; if so, what?
More data, yum yum. :D
I can postulate, based on past experience (not with LW meetups). It depends on the person. Some people like a lot of attention and find it energising… some people don’t and can find it overwhelming and exhausting. If a person finds it overwhelming and exhausting they may be turned off coming next time.
Should this be added to the “community” sequence?
It also has a more subtle and counterintuitive failure mode. People can derive status and get much satisfaction by handing out perfectly honest and well-intentioned advice, if this advice is taken seriously and followed. The trouble is, their advice, however honest, can be a product of pure bias, even if it’s about something where they have an impressive track record of success.
Moreover, really good and useful advice about important issues often has to be based on a no-nonsense cynical analysis that sounds absolutely awful when spelled out explicitly. Thus, even the most well-intentioned people will usually be happier to concoct nice-sounding rationalizations and hand out advice based on them, thus boosting their status not just as esteemed advice-givers, but also as expounders of respectable opinion. In the end, you may well be better off with a rash “who is he to tell me what to do” attitude than with a seemingly rational, but in fact dangerously naive, reasoning that you should listen to people when they are clearly knowledgeable and well-intentioned. (And yes, I did learn this the hard way.)
Things are of course different if you’re lucky to know people who have the relevant knowledge and care about you so much that they’ll really discard all the pious rationalizations and make a true no-nonsense assessment of what’s best for you. You can expect this from your parents and perhaps other close relatives, but otherwise, you’re lucky if you have such good and savvy friends.
An example would help this comment.
You can take any area of life where you could be faced with tough and uncertain choices, where figuring out the optimal behavior can’t be reduced to a tractable technical problem, and where the truth about how things really work is often very different from what people say about it in public (or even in private). For example, all kinds of tough choices and problems in career, education, love, investment, business, social relations with people, etc., etc.
In all these areas, it may happen that you’re being offered advice by someone who is smart and competent, has a good relevant track record, and appears to be well-intentioned and genuinely care about you. My point is that even if you’re sure about all this, you may still be better off dismissing the advice as nonsense. Accordingly, when you dismiss people’s advice in such circumstances with what appears as an irrationally arrogant attitude, you may actually be operating with a better heuristic than if you concluded that the advice must be good based on these factors and acted on it. Even if the advice-giver has some stake in your well-being, it actually takes a very large stake to motivate them reliably to cut all bias and nonsense from what they’ll tell you.
Of course, the question is how to know if you’re being too arrogant, and how to recognize real good advice among the chaff. To which there is no easy and simple answer, which is one of the reasons why life is hard.
I agree. I think that the grandparent is useful, but I’m a bit fuzzy on exactly what mental levers it’s telling me to pull and why to pull them.
The problem of getting good data on how other people see you is a topic I’ve been thinking about a lot lately. I’d love to see a top-level post on this, because I think it’s pretty essential for many areas of self-improvement, and I’d write it myself but I don’t think I have a clear enough idea of the problems involved. I didn’t think about this particular failure mode, for example.
Alternatively, are there any other resources that can help me get this information?
On the off chance this will be spotted in the sidebar: I’m a couple years late responding, but has anyone written anything useful on this subject? Is anyone in a position to do so?
Getting a correct model of others’ models of oneself, and knowing it’s correct, seems ridiculously difficult to me.
I agree that this is a difficult problem. It seems to be that way because the incentive structure is misaligned for truth. The costs of giving someone unbiased feedback are mostly paid by the giver of the feedback, but the benefits are mostly received by the receiver of the feedback. Thus, this is very difficult to get from people who are not close friends and allies- but those people are probably ones who have an above-average view of you.
Thus, one of the low-hanging fruit here is rewarding negative feedback, which is in many ways more useful than positive feedback (and yet most people don’t reward it).
It may be useful to ask people you trust questions like “How do you think other people view me?” The deflection to other people makes it easier for them to voice their personal concerns under plausible deniability, as well as getting at the questions of “how do I present myself to others?” and “what features of my personality and behavior are most salient?”
“Munchkinism” already has a commonly-known name. It’s called hacking.
Yes, let’s please call it “hacking,” or anything other than “Munchkinism.”
Feel free to make concrete alternative suggestions. “Hacking” is taken.
Isn’t ‘munchkin’ sort of taken too? The impression I got from a little googling is that the word as used by RPG players is a derogatory term. Calling someone that isn’t a compliment on their cleverness in exploiting the mechanics but mockery for missing much of the point of the game and being an annoyance to other players.
If that’s true then calling cryonics munchkinism would sound like agreement with people who say that death gives meaning to life or something like that.
The core of the insult is in the framing of the behavior as a negative (and an assertion of higher status of the speaker). The actual descriptive element of the behavior is a pretty close match to what we are talking about. This is perhaps enough of a reason to discard the word and create a synonym that doesn’t have the negative association.
The problem with the MIN-MAXing munchkin—or rather the thing that causes munchkin-callers to insult them is that they think Role Playing Games are about actually taking on the role and doing what the character should do. The whole @#%@# world is at stake so you learn what you need to about the physics and the current challenges. You work out the best way to eliminate the threat and if possible ensure a literal ‘happily ever after’ scenario. Then you gain the power necessary to ensure that your chance of success is maximised.
But the role of the character is not what (the name-caller implies) the point of the game is about. It is about working out what the game master expects, working out your own status within the group, and achieving a level of success that matches your station (and no higher). The incentive to the speaker is to secure their higher position in the hierarchy and maintain their own behavior as the accepted model of sophistication. Object level actions are to be deprecated in favor of the universal game of social politics.
Many of the same behaviours and judgments apply to life as well. Optimising for whatever your own preferences are as an alternative to doing what you think you are supposed to do. Optimising your behavior for status gain only if and when status gain is what you want or need.
I don’t play roleplaying games myself. I much prefer cards, board games or games that are physical. Both the social aspect and the games themselves are far more fun, and the roleplaying is just slow, with the ‘role’ of an individual borderline insulting. If I want to socialise I’ll socialise. If I want to hear someone else’s story I’ll read a book. If I want to play a roleplaying game I’ll download one on the computer. If I want to guess the boss’s password I’ll get a job—and I had damn well better be paid well for it.
I may be a little biased. The last time I did, in fact, play a group RPG, one of my companions thought it was ok to steal something of mine. I gave her fair warning and plenty of chances to comply but I ended up having to fight both her and the two allies she recruited (while the other two in the group stayed out of it). Once I defeated them I took my pick of their stuff by way of appropriate reprisal. It’s exactly what the character I was roleplaying would have done, and I wouldn’t roleplay a pushover, but at the same time the overall social experience isn’t especially rewarding. I haven’t once had to beat the crap out of three of my friends in the real world, which suits me just fine! :)
There is no problem with “Munchkinism.” The problem is that in old RPGs the rules imply poorly designed (see lack of challenge upon full understanding of the system) tactical battle simulation games with some elements of strategy, while the advertising implies a social interaction and story-telling game without giving the necessary rules to support it. Thus different people think they’re playing different games together, and social interaction devolves into what people imagine they would do given a hypothetical situation without consequences (at least until the consequences are made explicit, violating their expectations as you note in your example).
Put all my points into charisma and charm skills and go find me some wenches? Oh, you mean saving the world. Got it.
Actually that is another problem with RPG designs. There are social skills and stats provided but they are damn near pointless in practice. Even when you want to role play a lovable rogue who can charm, manipulate and deceive his way out of problems you may as well put your skills into battle axes. Because the only person that you need to use social skills on is the DM and that is an out of character action. Unless you somehow manage to find a DM who considers the interaction to be about the character trying to persuade an NPC and not the player trying to persuade him and just lets the player roll some dice already.
“What is the skill check for “seduce the maidservant and get her to show you the secret entrance to the castle”?”
… “No, I don’t need to tell you what lines I’m going to use… since I would just have to lie so as to not offend the sensibilities of the company. Dice. I want to use dice and charm wenches!”
… “What? Oh, this is just too much hassle. Let’s do what we know works. Guys, you take the guard on the left and I’ll take the guard on the right. Rescue the princess and kill everything that tries to stop us.”
Of course, what actions players enjoy actually role-playing out, and what actions they prefer to just encapsulate into a die-roll, varies a lot among potential players.
Most RPG systems I’ve seen seem optimized for players who enjoy making tactical decisions (do I wield a sword or cast a spell? do I go through this door or that one, and do I check it for traps before I open it?), and so devote an enormous amount of attention to the specifics of different weapon types but don’t care very much about the specifics of different wench-charming lines.
I could imagine it being different: e.g., the session starts with three or four hours of hanging out at the local tavern swapping stories, and otherwise navigating the tribal monkey politics of a simulated adventuring party, finding out which vendors have the best equipment and give the best deals, bartering with salespeople, etc., etc., etc. … and then everyone rolls against their “explore dungeon” to determine how successful they were, how much loot they got, who died, how many monsters they killed, etc. (“No, I don’t need to tell you which door I’m going to enter through. Dice. I want to use dice and explore dungeons!”)
But I expect they would appeal to a vastly different audience.
The analogy doesn’t fit. The salient difference here isn’t one of emphasis on a different aspects of adventuring. It’s that the bulk of the significant decisions for everything except the tactics boil down to guessing the DM’s password. And that just isn’t that fun. Nor is it compulsory (school) or economically worthwhile (paid employment), the other times that password guessing is the whole point of the game.
The reason the disgruntled charmer had to fall back on tactical combat is because that is the one aspect of the situation over which the players actually have influence. Because no matter how much attention you pay to that aspect it still amounts to trying to guess how some roleplaying nerd thinks you should pick up wenches! Something just isn’t right there.
On the other hand, designing an entire gaming system around a solid theory of social dynamics has real potential as a learning tool if run by those with solid competence themselves. “Look! It’s a 9 HB. They have a 30 second timeout. Quick, use a +3 neghit then follow up with that new 2d8 identity conveying routine you’ve been preparing all week! Let me run interference on the AMOG to hold agro while you establish rapport.” (No, on second thought, let’s not go to Camelot. ’Tis a silly place.)
I agree completely with you that “how some roleplaying nerd thinks you should pick up wenches” bears no meaningful relationship to real social dynamics, so it’s all password-guessing.
From my perspective, the same thing was true of slicing swords through armor, raising allied morale, casting spells, praying for divine intervention, avoiding diseases in the swamp, etc. None of those simulated activities bore any meaningful relationship to the real thing they ostensibly simulated.
But I’ll grant that in the latter cases, there were usually formal rules written down, so I didn’t have to guess the passwords: I could read them in a book, memorize them, and optimize for them. (At least, assuming the GM followed them scrupulously.)
Then, of course, there are the actual strategic roleplaying choices. Not the mere tactical ones of how to fight some orcs. The ones where you have to make a choice on where you go next. Roughly speaking you are often best off choosing what the rational course of action is and then picking the opposite. It’s a lot more fun, the battles are both more likely and more of a challenge and you get far more experience! If the DM already has a plan on how long his adventure will take to complete and a rough idea of what you’ll be fighting at the end then the more danger you encounter in the mean time the better. So go sleep in that haunted wood then walk into what is obviously a trap.
Does anyone remember where Eliezer joked about leaving his spare coins around under random objects? He also made a point that in roleplaying games you are usually best served by going around and doing everything else first instead of doing the thing that is the shortest path to getting what you want.
I consider this a symptom of poor scenario design—the availability of unpredictably optimal actions is the key technical difference (there are of course social differences) between open-ended and computer-mediated games. If the setting is incompatible with the characters’ motivations, it’s impossible to maintain the fiction that they’re even really trying, and either the setting’s incentives or the characters’ motivations (or both in tandem) need revision.
Running a good open-ended game in the presence of imaginative and intelligent players is hard. You either leave lots of material unused, or rob the game of its key strength by over-constraining the set of possible actions.
Sure.
Of course, it helps to be clear about what you actually want.
IME most computer RPG designers assume their players want to “beat the game”: that is, to do whatever the game makes challenging as efficiently as possible. And they design for that, clearly signaling what the assigned challenges are and providing a steadily progressing path of greater challenge and increased capacity to handle those challenges. (As you and EY point out, this often involves completely implausible strategic considerations.)
This is also true of a certain flavor of TT RPG, where the GM designs adventures as a series of challenging obstacles and puzzles which the players must overcome/solve in order to obtain various rewards. (And as you suggested earlier, one could also imagine a social RPG built on this model.)
In other (rarer) flavors of TT, and in most forum-based RPGs, it’s more like collaborating on a piece of fiction: the GM designs adventures as a narrative setting which the players must interact with in order to tell an interesting story.
It can be jarring when the two styles collide, of course.
There is far more than a difference of styles at work.
Well, that’s portentous. Is this meant as a back-reference to the things you’ve already discussed in this thread, or as an intimation of things left unsaid?
The former, but I suppose both apply. Either way I thought enough had been said and wanted to exit the conversation without particularly implying agreement, but without making a fuss either. A simple assertion of position was appropriate. While strictly true, saying “further conversation would just involve spinning new ways of framing stuff for the purpose of arguing for a position and generally be boring and uninformative” would carry connotations that I didn’t want to convey at the time. The conversation to that point was positive and had merely exhausted its potential. Quit before it is just an argument.
Since you asked.
Yeah, I think roleplayers and writers share the position that sadism is one of the most important virtues.
I read some Ian Irvine a while back—the punishment he deals out to his two protagonists goes through sadistic and out the other side. But on the other hand he did let the pair hook up and have a stable, secure relationship whenever one or the other wasn’t either kidnapped or out alone on the run in the forest with no food and probably a broken leg. I didn’t quite make it through the series but I assume they lived happily (albeit in intermittent agony and constant adversity) ever after. So he’s just sadistic, not cruel. :)
Like Melf’s Minute Meteors doing fire damage. Those things are still supercooled by the time they hit the ground. Those trolls should be fine! (Until you use Melf’s Acid Arrow).
That’s an issue that traditional-game GMs go back and forth on all the time—some say “but it’s more interesting if you role-play it out”, and some say “but you’re not making the fighter actually stab people when he wants to make an attack”. Personally, in that sort of game I like to have players in-character to an extent, but their social stats should be the thing that determines their character’s success at social tasks.
There are a ton of spectacular indie games that deal with this in other ways!
Wuthering Heights Roleplay is just incredible. Your main stats are Despair and Rage. There are general rules for matching tasks to stat rolls, and specific rules for Duels, Murder, Art, and Seduction. The general trajectory of a game is: a bunch of terrible people obsessed with their own problems (or rather, their Problems) start falling in love with each other and making dramatic revelations, until eventually they’re all hacking each other to pieces. It’s a fun evening.
The Mountain Witch is another interesting one. All “conflicts” are decided by a simple roll-off, d6 versus d6. You get more d6s (keep the highest) if you’re working with other people. Players keep track of how much they Trust each other player, and you can spend someone else’s Trust in you to help them out with bonuses in conflicts, to gain control of the narration of the outcome of their conflicts, or to give yourself bonuses when directly opposing them. (There’s a lot more to this one, but that’s the gist of the conflict mechanic.) Cool stuff.
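(For the curious, a minimal sketch of that keep-the-highest roll-off in Python; the function name is mine, and the Trust bookkeeping is simplified to “a helper adds a die”:)

```python
# Toy sketch of the Mountain Witch conflict mechanic described above:
# each side rolls d6s and keeps the highest; help from other people
# (spending Trust) means rolling more dice.
import random

def roll_off(my_dice: int = 1, their_dice: int = 1) -> str:
    mine = max(random.randint(1, 6) for _ in range(my_dice))
    theirs = max(random.randint(1, 6) for _ in range(their_dice))
    return "win" if mine > theirs else ("lose" if mine < theirs else "tie")

# With one ally helping, you roll 2d6 keep-highest against a lone opponent's 1d6.
print(roll_off(my_dice=2, their_dice=1))
```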
What, no d20s? Or even a d8? Where’s the geeky fun in that? :P
Instead, you use poker chips to represent trust! I find that appealing somehow...
Yeah, that one just uses d6 - though there’s an interesting “duel” mechanic where you and an opponent roll secretly, then decide together whether you’ll each roll again—to emulate two ronin staring each other down before deciding the battle with a single cut. (The game has a very specific setting—you’re a group of ronin who’ve been hired to climb Mt. Fuji and kill the Witch (though he’s a dude?) that lives on top. You all have secret ulterior motives! I think it’s been adapted to similar scenarios such as bank heists.)
Uh, Wuthering Heights uses d100, and you roll under or over your Despair / Rage depending on what you want to do. For example, killing someone means rolling below Rage (easier to do the angrier you are), whereas noticing other peoples’ feelings and stuff requires a roll over Despair. Ooh, plus, if you’re into nerdy dice-related stuff, there’s a big Random Table of Problems, like “You are an alcoholic”, “You are a homosexual”, “You are Irish”, “You are in love with a member of your family”, or “You are a poet”, and everyone has to roll d100 a few times to get their Problems.
Oh! And if you just want to chuck lots of different kinds of dice around, you can’t go wrong with Dogs in the Vineyard—where you play itinerant teenage pseudo-Mormon enforcers of the faith in a west that never was. All your traits have some amount of dice of some size next to them, and when they come up in a conflict, you roll them into your pool and can use them when raising / seeing. For instance—possessions of any sort are 1d6, 1d4 if they’re sorta worthless, 1d8 if they’re excellent (criterion: in order to be excellent, a thing has to be good enough that people might remark on how excellent it is), 2 dice if they’re big, and an extra d4 if it’s a gun (so a big, excellent pistol is 2d8+1d4).
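(If it helps, the possession-to-dice mapping above reduces to a few lines of Python; the helper name is mine, not the game’s:)

```python
# Dogs in the Vineyard possessions, per the description above:
# d6 by default, d4 if worthless, d8 if excellent; two dice if big;
# plus an extra d4 if it's a gun.
import random

def possession_dice(worthless=False, excellent=False, big=False, gun=False):
    size = 4 if worthless else (8 if excellent else 6)
    dice = [size] * (2 if big else 1)
    if gun:
        dice.append(4)
    return dice

pistol = possession_dice(excellent=True, big=True, gun=True)
print(pistol)                                          # [8, 8, 4], i.e. 2d8 + 1d4
print([random.randint(1, sides) for sides in pistol])  # one roll of that pool
```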
Huh, kinda geeked out there. ^-^;
Ok, poker chips qualify as a legitimate nerd-coolness alternative. I’m convinced. :P
A Mormon with a deagle
you know that’s unheard of!
Hah! And of course, this being a roleplaying game, I defy you to find a player who won’t take a big, excellent gun.
The other problem with Munchkinism is that, once your character actually achieves godlike power by breaking the game system, there’s no actual challenge left. It’s like solving a Rubik’s Cube by peeling the colored stickers off of the sides and sticking them back on in the “solved” position; there’s really no point to it. So you self-handicap and choose to play a character that isn’t Pun-Pun.
Munchkinism is definitely not the same as cheating. You don’t break the Rubik’s Cube physics, you work within them. A munchkin probably would google “solve Rubik’s Cube” and then apply a dozen-or-so-step algorithm that will solve the cube from any given starting configuration. In fact peeling the stickers isn’t even cheating properly. The result doesn’t even constitute a solved Rubik’s Cube. It constitutes an ugly block that used to be a Rubik’s Cube. It is far better to simply dismantle the cube and click it back into place correctly. (This is actually necessary if some clown has taken out one of the blocks and swapped it around such that the entire cube is unsolvable. A cruel trick.)
A legitimate challenge there is to set yourself the task of solving it without external knowledge. The one I would go with (if I was interested in playing with the cubes beyond being able to solve them all at will) is to learn to solve the cube blindfolded. You get to look at the cube once for a couple of seconds then you have to do the whole thing by touch (and no, there is no braille there to help you). As a bonus this is exactly the sort of task that grants general improvements in mental focus!
That’s the kind of thing I like to demonstrate once in principle and then propose a rule change. My usual example is that of playing 500 and the open misère call. I usually propose something of a limitation on the frequency of misère calls (and allow any 10 call to beat it). If the other players don’t want the limitation I proceed to play open misère every time it is rational to do so (about 1⁄4 of hands, depending on the score at the time). And ask them if they have changed their mind every time I win.
I like self-handicaps. At least in the form of giving yourself a genuinely challenging task and then trying to overcome it. My character selections (in RPGs when I have played them, and in CRPGs) tend to be based on novelty or emotional appeal. All the choices after that can be made intelligently.
RPGs are kind of a weird case; they’re not “games” in the same sense as a competitive game, because there’s not one fixed thing specified in the rules that you’re supposed to be maximizing (this is part of why I don’t play RPGs :P ). With those you start getting derogatory nicknames for those who don’t do everything possible to win (e.g. “scrubs”). Though I don’t know of any short term for those who do (aside from just “people who play to win”), except in the context of Magic where they’re known as “Spikes”. Of course, if we’re speaking of “winning at life”, there it is also not clear what should be maximized! People aren’t very good at knowing their own goals. So that’s something of a disanalogy.
Duh, winning.
Twice.
Munchkinism’s more vivid in my mind. Then again, I love to make up new words.
On the other hand if ‘real’ munchkins were anything like ‘munchkins’ in this sense they would have taken a level in badass then dealt with both the wicked witches themselves without waiting for a fortuitous outside intervention. And made the yellow brick road straight.
It’s not the same thing. Picking locks is a hack. Cryonics is something more, which is why even most people who can pick locks don’t go for it.
I wonder if it’s accurate to say that for hacks, it’s the means that’s considered “cheating”, whereas for cryonics, it’s the end itself that’s considered “cheating”.
That seems like a good distinction between Munchkinism and Hacking, as I’ve seen them used by their respective cultures. Munchkinism is about using the rules to accomplish an “unacceptable” goal, whereas Hacking is about accomplishing acceptable goals via “unacceptable” methods. Thank you for helping me cement why the two terms felt like very separate ones :)
http://home.netcom.com/~shagbert/pages/munchkins.html :
No, it’s quite the same thing.
-- rms, “On Hacking”
Does not the bolded section describe cryonics? Isn’t death a “silly rule”? I think your sense of the word “hacking” is too strict.
As another example, the Jargon file has a general definition of ‘hacker’:
That seems to fit pretty well.
It certainly fits ‘hacker’ (and myself) well. It doesn’t fit people who are indifferent to intellectual challenge but just want to live (and so do cryonics) or just want to win (and so min-max the @#%$ out of life).
“Min-maxer”. Now that could be a reasonable label.
Slang meaning’s very similar to ‘munchkin’, but doesn’t make people who aren’t gamers think of fairy-tale midgets. Sounds good to me. It’s also got decision theory connotations as a bonus.
Min-maxing connotes being extremely good at some things by being extremely bad at some others (the “min” part), so it’s not quite the right fit.
Recognizing things that are not worth putting effort in as well as things that are isn’t a bad thing, given that there is an infinite number of skills you could use your time to get good at. Such as shaping gravel into mounds exactly 17 centimeters tall, memorizing telephone directories, or playing chmess competitively.
Okay, but that’s not what defines munchkins. Munchkinism, as I see it, is less about getting points in good areas by burning points in bad areas (min-maxing) than it is about getting points in good areas by burning the spirit of the game.
I think that willingness to burn the spirit of the game when it comes to things like signing up for cryonics instead of confronting the inevitability of your mortality, drinking extra-light olive oil instead of trying to diet by sheer willpower, or building a recursively self-improving AI instead of trying to solve the world’s problems the normal way, is exactly what distinguishes Munchkinism from mere hacking.
--game theory connotations, to be specific.
Upvoted for this:
I think your model of the space of human attitudes is insufficiently nuanced. I like the ‘hacking’ attitude as well as the one here being called munchkinism, but they are two complementary features.
The part of your quote that you didn’t bold is loosely relevant to the distinction.
Though I agree “hacking” isn’t quite the right word,
Only under the most anemic criteria. Overlap between lock pickers and hackers does not make lock picking hacking. Using something not-as-it-was-intended is the start of many hacks, but a simple reversal of function or overcoming of function by itself doesn’t make the cut.
I am sorry, are you really saying that people don’t go for cryonics because it is something more than a hack? I find this causal relationship hard to believe. Do people in general not go for things that are more than hacks?
I think it is a lot more likely that people don’t go for cryonics because it is weird, and maybe because it requires thinking about your own mortality. And most importantly, because it is not the default option.
Probably a third cause. There is some reason why people don’t go for cryonics, which is also the reason it isn’t a hack. Too weird might be that cause.
Not all hackers go for the same type of hacks.
I get the impression that you draw the distinction between ‘hacking’ and ‘munchkining’ as “They both work, but would the average guy think that it’s clever or dismiss it as crazy / unfair / uncustomary?” Am I correct?
Not really. It involves the ability to do things that would make other people look at you funny, and a relentlessly optimizing attitude toward all of real life and not just computer science problems or particular locks. There may be something more to it, too. In any case Timothy Ferriss != John McCarthy (albeit McCarthy himself may also have the Munchkin-nature) and people who build championship Magic decks don’t think in quite the same way as great programming hackers, though you can also be both.
So, new attempt:
Hacking = figuring out clever ways to circumvent [apparently] tough problems
Munchkining = constantly identifying which resources are truly relevant, and then actually abiding by that assessment. Or, as a Magic legend once said, “Focus only on what matters.”
Closer?
A hacker is just a satisficer that places little value on a norm or norms. A munchkin is an optimiser.
Removing one constraint allows a satisficer to achieve better results on all the other constraints; by contrast, an optimiser will violate as many constraints as it takes to get the best result on the optimised criterion.
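A toy illustration of that satisficer/optimiser contrast (all option names and numbers are invented for the example):

```python
# A satisficer takes the first option that is good enough and within its
# norm budget; an optimiser takes the best-scoring option no matter how
# many norms it tramples.
options = [
    {"name": "conventional plan", "payoff": 5, "norms_violated": 0},
    {"name": "mild lifehack",     "payoff": 7, "norms_violated": 1},
    {"name": "full munchkin",     "payoff": 9, "norms_violated": 3},
]

def satisfice(opts, good_enough, norm_budget):
    for o in opts:
        if o["payoff"] >= good_enough and o["norms_violated"] <= norm_budget:
            return o
    return None

def optimise(opts):
    return max(opts, key=lambda o: o["payoff"])

# A "hacker" satisficer that discounts one norm still stops at good enough;
# the munchkin optimiser goes straight to the top payoff.
print(satisfice(options, good_enough=6, norm_budget=1)["name"])  # mild lifehack
print(optimise(options)["name"])                                 # full munchkin
```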
I came back to ask a similar question. I would not call the issue of choosing cryonics more than a hack. I think the difference is that a hacker is often someone who has optimized well in a narrow area, while a munchkin will look at the whole system, optimize it, and constantly look for new rules to exploit. The difference I see EY drawing is one of local optimization versus global optimization (or at least attempting to).
Munchkinism in gaming is also generally connected to zero-sum games, so when gaming against a munchkin you either lose or have to out-munchkin them. The meaning is generally pejorative because more collaborative games tend to become unfun at this point. I’ve never seen gamers who do odd and massively optimized things that aren’t intended for winning zero-sum games, such as building a working CPU in Minecraft, called munchkins. There always seems to be the aspect of outshining the other players within the game in munchkinism.
The analogy of this to the social sphere might be why Tim Ferriss got flak for his 4-Hour Workweek.
I’ve been having some sort of half-formed thoughts recently that this has brought back into my foreground that I’m curious to see other people’s thoughts on.
It seems to me that the likelihood is quite high that there are people on here who have inherently competing utility functions (the examples below were chosen merely because they are fairly common, directly competing, not obviously insane sets of motivations; I intend no value judgment on either of them). Thus, making one of the people whose utility function is dramatically different from yours more rational could be an extremely counterproductive move for you to make in terms of satisfying your own utility function. Imagine a libertarian rationalist accidentally training a socialist guerilla, who goes on to be very successful at fulfilling his own utility function, and thus dramatically harmful to his teacher’s. Or perhaps more realistically, a socialist teacher trains a libertarian who goes on to found a company that does business in the Third World in a way that the teacher disapproves of.
How would we avoid this? Should we avoid this?
A few months ago I was roundly, and rightly, rebuked for suggesting that rationality will lead you to certain political positions. On the other hand, people have also presented the idea that being rational will lead you to value various “instrumental ethics” (I believe that was the term?). I can’t find the article right now, unfortunately. Do you (this is directed at everyone) believe that simply by making people more rational, we’ll make them more likely to do things we approve of, in the sense that they further our utility functions?
In other words, if my opponent begins to make choices that better optimize their goals, do I gain or lose?
It seems clear that the answer depends on how many of their goals I share, how many I oppose, and how much I value the shared goals relative to the opposed goals.
Suppose we are Swift’s Big-Endians and Little-Endians, who agree on pretty much everything that matters (even by their own standards!) and are bitterly divided over a single relatively trivial issue. If one side is suddenly optimized, everybody wins. That is, the vast majority of everyone’s current goals are more effectively and efficiently met, including those of the opposition.
Sure, the optimized party gets all of that plus the value of having everyone open their eggs on the side they endorse… which means their opponents suffer in turn the value-loss of everyone opening their eggs on the side they reject. But they will be suffering that value-loss in the context of an overall increase in their value. I’m not saying everyone wins equally, just that everybody wins. Whether they are happy about this or not depends on other factors, but they seem pretty clearly to be better off.
In that scenario, upgrading my opponents means I win, although upgrading my allies means I win more.
(Of course, it’s possible that both kinds of Endian conclude that they get more of what they want by self-modifying to stop caring so much about peeling eggs, and then work out ways to do so. One person’s “value” is another person’s “bias.” But that’s another discussion.)
By contrast, if instead of Endians we have more fundamentally opposed opponents… say, aliens who want to modify planets like Earth to have cyanide-rich atmospheres so they can colonize, whereas we would prefer to have more oxygen-rich atmospheres which are toxic to the aliens.
In a case like that, optimizing our opponents means they get a larger share of the available worlds (either through better negotiations, or winning wars, or more efficient exploration, or whatever) and in the long run dominate the galaxy. If we’re at a point where planetary surfaces really are the most valuable thing in play, then they win and we lose.
(Of course, it’s possible we both conclude that we get more of what we want by self-modifying to breathe whatever atmosphere the planet happens to have. But again, that’s another discussion.)
Coming back down to Earth, then: I guess the question is, how many existing group-level conflicts among humans are primarily superficial conflicts among groups whose shared goals dwarf their opposed ones (“Endian” conflicts), and how many really are deep conflicts among groups whose opposed goals dwarf their shared ones (“oxygen-cyanide” conflicts)?
I don’t know, but I would be surprised if a significant number were non-Endian.
If that’s true, then in general optimizing everyone, even my opponents, leads to everyone being better off, even me. Not because everyone immediately realizes that I’m right and they’re wrong, but because most of us already agree on the overwhelming majority of our values.
OTOH, that might be false.
I really hope that this is the case, but I don’t think that it is. I think that the difference between the hypothetical socialist and libertarian is more dramatic than the difference between a Big-Ender and a Little-Ender. Consider this situation:
All of humanity consists of 100 people, starting at utility 10, and a random one of them is given this choice: either keep things the way they are (everyone has 10 utilons, total of 1000), or one person, at random, is given 990 utilons while everyone else loses 9, so one person will have 1000 and everyone else will have 1, for a total of 1099, or ~11 per person. The expected utility of the latter option is higher than the first, so every rational being must pick the latter, right? Though I’ve learned a lot since that conversation and I would no longer make the same points, I still think that an equitable distribution of utility is better than an unequal one. Many people genuinely think it is a wonderful thing to make it so that the world is highly stratified, that there are a whole lot of people who lose in order to have a few people who really, really win. There are also a whole lot of people who genuinely think it is worth sacrificing some amount of “progress” (by which I mean technological innovation, cheapness of consumer goods, whatever) in order to have people’s lives be more equitable. I lie closer to the second camp, but I haven’t pounded my tentstakes into the ground, and even if I have, I certainly haven’t laid a brick-and-mortar foundation, so I can uproot fairly quickly. I understand the logic that comes to the former conclusion; I think it just starts from different premises than the people who come to the latter (though of course there are crazies who come to both, but that goes without saying). It does seem to me, however, that the two actually are fundamentally irreconcilable in very important ways. I hope I’m wrong about that, but it really seems like I’m not...
Edit: Certainly arguments like “ought gay people/mixed-race couples be allowed to get married” seem more like arguments about egg-peeling, and so your strategy hopefully would work there.
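For concreteness, here is a quick check of the arithmetic in the thought experiment above. This is a minimal sketch in Python; the variable names are illustrative only, and the numbers are simply those stated in the comment:

```python
population = 100
baseline = 10  # everyone starts at 10 utilons

# Option A: keep things as they are.
total_keep = population * baseline  # 100 * 10 = 1000

# Option B: one random person gains 990 utilons; the other 99 each lose 9.
total_gamble = (baseline + 990) + (population - 1) * (baseline - 9)
# = 1000 + 99 * 1 = 1099

print(total_keep / population)    # 10.0 expected utilons per person
print(total_gamble / population)  # 10.99 -- higher, though 99 people end at 1
```

The per-person expectation really is higher under the gamble, which is exactly why the example bites: expected utility favors the option that leaves 99 of 100 people worse off.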
Absolutely agreed that the difference between “I’m worse off than I was, and you’re better off” (as in your example) and “I’m better off than I was, and you’re better off by far more than I am” (e.g., we start off at 10 utilons each, a randomly chosen person gets +1010 utilons, and everyone else gets +10 utilons) matters here.
I’m talking about the second case… that is, I’m not making the “maximize global utility” argument.
This has nothing to do with inequity. The second case is just as unequal as the first: at the end of the day, one person has roughly a thousand utilons more than his neighbors. The difference is that in the second case his neighbors are better off than they were at the start, and in the first case they are worse off.
As for whether one or the other real-world cases (e.g., socialist/libertarian) are more like the first or second; I don’t really know.
Maybe after we all become measurably more rational, we can start to talk about politics without mind-killing?
Under those circumstances, maybe we’ll find that the socialists and libertarians can find more common ground?
Politics is a mind-killer because the category itself is difference-killing. You just don’t discuss politics; you discuss global problems and how to solve them, without even entering into categorization beforehand...
I don’t actually want to discuss politics; I realize that I hate politics. But I love talking about public policy. Yet discussing (e.g.) tax policy or monetary policy seems to automatically shift the conversation into politics. And then the yelling begins.
That was exactly my thought… so you need to extract the problem that tax policy or monetary policy is trying to solve, contextualize it, and maybe even translate it into a metaphor… that should be enough for a rational mind to start discussing rationally...
Survivors and cult historians alike agree that this post, combined with the founding of the “rationalist boot camps”, set in motion the sequence of events which culminated in the tragic mass cryocide of 2024.
At every step, Yudkowsky’s words seemed rational to his enthralled followers—and also to all outside observers. And yet, when it became clear that commercial pressures were causing strong AI to be deployed long before Coherent Awesomeness Extrap-volition Theory could be made mathematically rigorous, the cult turned against itself.
One by one, each member’s failure to invent and deploy Friendly AI before IBM-Halliburton turned on its Appallingly Parallel Cheney Emulation Cluster was taken by the feared Bayes Tribunal as evidence that they were insufficiently awesome, and must be ejected from the subterranean bunker complex. With each Bayesian update, the evidence that the cult’s ultimate goal could not be achieved was strengthened—and yet, as the number of followers fell, the more Yudkowsky came to fear a fate worse than death—exploring the possible endings to his life within the simulation spaces of Cheney’s mind—in a game-theoretic reprisal for his work on Friendly AI...
In desperation, he announced his greatest Munchkinism yet—the cult would commit mass quantum suicide by freezing. He convinced himself that only a Friendly AI would commit the resources to resurrect them; hence they would force themselves into a reality branch where a Friendly AI emerged by sheer chance before IBM-Halliburton could eat the world.
The final 150 acolytes tragically activated their decapitation/freezing mechanisms minutes before the Cheney cluster uttered its historic first and final edict—“I’ve changed my mind—get me out of here”...
Like Einstein’s brain before it, Yudkowsky’s brain became the object of intense interest from neuroscientists. Slices were acquired by various institutes and museums with suitable freezer facilities, and will be studied and viewed by the public until medicine works out how to revive him.
Excerpts from “Rationalism—The Deadly Cult of Math and Protein” (Amazon-Bertelsmann, 2031)
Erm, maybe my standards are too high, but this didn’t seem overwhelmingly well-written as fiction and I really worry when material that attacks a target that’s supposed to be attacked gets a free pass as art. Or maybe you all actually enjoyed that, and I’m being unreasonable in expecting blog comments to meet publishable quality standards.
This got a few chuckles from me, but I have found that fiction in which present-day issues escalate implausibly into warfare is a strong indicator and promoter of affective death spirals. You do realize that this story features prominent falsehoods that people actually believe, and is completely absurd in ways not inherited from the things it’s satirizing, right?
I spent most of January 1990 (I think that was the month) reading the entire run of Astounding/Analog from 1953 to 1985. This story was better than quite a lot of the extrapolations therein. Anthologies of the best modernist SF gloss over really quite a lot of the awfulness that was actually published, even in the best magazine …
Sturgeon’s Law: Ninety percent of everything is crap.
Well, yeah. But boy did I have it brought home to me.
In other words, the “Special Committee” will result in slow evaporative cooling?
Or in this case, evaporative freezing.
I voted this both up (for cleverness) and down (for distracting from actually important discussion).
I voted it down for decidedly non-clever thinking about quantum suicide and a complete misrepresentation (or misunderstanding) of rational thinking. It attributes to Eliezer the complete opposite of the ‘Shut Up And Do The Impossible’ attitude that Eliezer is notorious for.
I voted it up for (I assume) cleverly, satirically representing views other people might have about the group that sound plausible to the mainstream.
The idea of a mass quantum suicide might seem paradoxical, but of course the cultists used a special isolation chamber to prevent decoherence, so they were effectively a single observer.
That is even worse thinking about quantum suicide and further still from likely Eliezer beliefs. Eliezer endures criticism for being too liberal with his mocking of certain beliefs about QM, of which the one you are relying on is a part.
In that order?
I didn’t actually click any buttons, so I’m not sure it matters. If I were to assign a value to this post, it would lie along a multi-dimensional axis and tilt sideways in a direction that is negative for the purposes of Less Wrong but positive for my personal enjoyment of life. (It’s less negative to Less Wrong than it is positive to my personal utility, but when multiplied out, the negative value to Less Wrong may produce more overall negative utility.)
Of course if you hadn’t lied about voting it would tell us the probable final state of the recorded vote.
I think you are taking both the original post and my response more literally and seriously than they were intended. I didn’t lie. I joked.
I find it interesting how many people here (including myself) assumed you literally voted it both up and down. I rather liked the idea myself, since I hadn’t even considered that set of actions.
I’m also curious now, whether that action would be functionally different from abstaining. I’d assume it eats one point of your “downvote capacity” and nothing more, but I could see a system where comments get flagged as “controversial” due to lots of votes in both directions (I even recall a “controversial” flag in the code somewhere...)
And for extra irony it is interesting to note that I wasn’t one of them and it didn’t even occur to me that it would ever be taken literally. I make the same criticism/compliment myself from time to time and don’t actually click anything given the technical equivalence. Actually voting up and down is an optional extra for those with a truth fetish.
*tests* Per another commenter, the second vote seems to supersede the first vote, so they’re actually not technically equivalent. Interesting :)
That said, I didn’t put any great weight in it being literally true, nor am I offended that it was a joke. It’s the sort of joke I’d make myself; it just seemed slightly more likely/interesting[*] that it was meant literally :)
Yes, you have to click the second one twice. ;)
Based on the karma for my last comment (-2), I’m hoping someone simply forgot that step :)
I’m not sure what happened to the voting in this thread. I assume someone took offense at the whole conversation. Never mind.
It looks like it just registers the more recent vote, if by “vote up” and “vote down” you mean pressing the buttons labeled as such. Clicking on the same button again retracts the up/down vote.
This is my understanding from fooling around with the vote up / down buttons; there may be hidden behaviors.
This is correct. It is just a three state toggle. Up, null, down.
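To make the behavior described in this subthread concrete, here is a minimal sketch of a three-state vote toggle; the function name and vote encoding are my own illustration, not the actual site code:

```python
def apply_vote(current, clicked):
    """current: the stored vote, +1 (up), 0 (null), or -1 (down);
    clicked: +1 for the up button, -1 for the down button."""
    if clicked == current:
        return 0      # clicking the same button again retracts the vote
    return clicked    # otherwise the more recent vote supersedes the old one

assert apply_vote(0, +1) == +1   # upvote from the null state
assert apply_vote(+1, +1) == 0   # a second click on "up" retracts it
assert apply_vote(+1, -1) == -1  # "down" replaces an existing upvote
```

Under this model, “voting up then down” is not equivalent to abstaining: the stored state ends at −1, matching the observation above that the second vote supersedes the first.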
That I replied to a literal aspect does not mean I failed to comprehend the spirit behind the common reddit jest about simultaneous up and down votes—and given the triviality, the word ‘lie’ isn’t an accusation to be offended at. Perhaps I could have gone with “Lies! :P” to make the non-seriousness unmistakable.
Following along within the make-believe reality of a jest while the actual topic is in the background is play, and “I didn’t really, so it doesn’t matter” is dropping the ball—in the counterfactual jest reality it does matter. It is good form to let others run with what you started, and forcing the original frame is what makes things serious.
Upvoted for amusement value.
This section is a little confusing to me, so I’m going to lay out my thoughts on the subject in order to help myself organise them and to see what other people think.
I do attempt to improve myself by thinking “what shall I do to Improve Myself today?” Or rather, I spent several days coming up with plans for how to improve myself, and now every day I ask myself “what’s on the plan for how to Improve Myself today?” I’m also constantly revising the plan as I gain new information or think of new things. I’ll give two examples. I’m revising the plan right now so that the next time I have a full day of free time, I shall spend it learning to solve a Rubik’s Cube, because my brain considers “able-to-solve-Rubik’s-Cubes” high-status, and so I suspect that doing so shall be helpful in building my own success spiral. I thought of this because I noticed that I was beginning a depressive death spiral. Due to my failure to get into graduate schools with funding, if you’re curious (another part of the plan: admit embarrassing facts like that in order to eat away at the shame, so that my anger about them becomes cold, not hot, so that I can use it. [a third part of the plan: read Ender’s Game. I have started, and 10-year-old me is screaming across the decades “HOW DID WE NOT READ THIS WHEN WE WERE ME”]). Heh, got kind of sidetracked there, let me get back on topic.
I have done at least one thing that I can think of that I actually did do by, on the day I did it, thinking “how shall I improve myself today?” It was to start exercising. If you’ll indulge me, I’d like to share the specific bits of rational thinking I did. “Hey bgaesop, what we did the last time we wanted to exercise, and therefore the first thing you thought of when you thought ‘how should I work out?’, didn’t seem to work so well. Furthermore, we’ve encountered credible evidence since then that that is a dumb way to work out. You have access to people who know how to exercise better than you do, through your fraternity. You should go ask them how to do so. We also know that we have a tendency to procrastinate and never start things that we can start at any time. Therefore, you should go ask your friends right now.” As a result, I am now on a regular exercise schedule, doing free weights when I have access to a spotter and machines when I don’t. I think the key to getting this to work was twofold: first, admitting that someone else knows more than me on the subject and that I should ask them for help (an ability that does not come naturally to me and that I have been working on for quite a while); and second, going out and Getting Crap Done the moment I thought of it.
I think that, for me at least, putting a plan into motion as soon as you think of it, instead of procrastinating, is extremely helpful in terms of actually Getting Crap Done. This is especially true of plans that have high initial costs in willpower, shame, and so on, compared to the costs of maintaining them. For example, it was much more embarrassing to go to the gym the first time, when I considered myself a scrawny nerd weakling walking into the jocks’ den, not having any idea what to do. After the first time going with my friends I was unafraid to go alone. Now that I’ve gone enough that I can actually see results on my body (which took astoundingly little time, seriously, like two weeks) I look forward to going. In fact, I’m going to go as soon as I post this comment. I’ve been putting it off all day, so this is an effective way of forcing myself to do that. You’ll know I was lying if I comment on anything in the next 45 minutes or so :)
This combines well with the whole idea of having rationality be demonstrably awesome, because my body now looks better than it ever has in the past, including when I was expending more effort and time on a less intelligently put together workout program. Taking the extremely simple steps of “look at what people who know about this subject have to say, okay now find one of them to help you, okay now actually do what they said” worked wonders over my old method of “google until you find something fun and easy sounding that promises to work well.” Upon writing this out, however, I am noticing how much what I did resembles simply appealing to authority instead of trying to figure out the answer myself. I could have sworn there was a specific post about learning from other people as opposed to discovering things yourself, but I can’t find it at the moment—does anyone know what I’m thinking of? Regardless, since I certainly don’t know and can’t do everything, and I know that my inability to admit that (mainly to myself) has been one of my biggest impediments in life, I hope that everyone would agree that there’s no harm in trying to learn from others.
I’m extremely relieved to hear that you and Vassar are worrying about dilution of rationality, but if all you require is reaching the absolute threshold of competence, you may not be worrying about it enough. I think it’s very possible that the best options available to a group in which the average level of rationality is 9 out of 10 are several times as effective on a per-person basis as the best options available to a group in which the average level of rationality is 8 out of 10.
I am not sure that worrying about the perils of growth to the degree you suggest is wise. Given how difficult it is to separate personal dislikes from competence, it seems to me that having a process to identify and remove specific problems (X is scaring off the ladies, let’s train him or boot him) is much better than trying to optimize the group (I have more fun when Y isn’t there, let’s stop inviting them).
I also suspect this isn’t intended to be an ivory-tower coterie, but a growing movement, which means you want all people above minimum competence regardless of their current skill level. If you’ve got that sort of growth atmosphere, you’ll eventually get enough people that you can sort, and your immediate group will have more members of the average quality you want.
I completely agree with this comment. I don’t believe anyone is sufficiently epistemically rational to have reached the threshold of actual competence, which is roughly 17 orders of magnitude more difficult than reaching the threshold of being known as awesome by your peer group. Thousands of men can work 12 hours a day for many decades without producing as much value as a single clever insight.
Friendly AI isn’t solved, the Singularity Institute has like 4 real researchers and none of them are really working on FAI even if some of them have seemingly clever ideas, some people like Mitchell Porter and Vladimir Nesov etc are working on Friendliness or very related problems but not many and it’s disorganized and no one thinks it’s important to actually address disagreements despite all this talk about how disturbed one should be by disagreement, Less Wrong is probably the most rational forum on the web and yet comments that are flat out wrong get upvoted too much, especially about tricky problems like FAI, et cetera.
We would not know if we were significantly below the necessary level of competence to have an important-in-hindsight insight. Hell, even the Singularity is just the opening of the rabbit hole. We could be missing some important things about the relevant philosophy. As a stupid example, the current common conception of the Singularity is “we fill the universe with utilitronium” which might not be nearly the correct framing in a Tegmark ’verse. Our comparative advantage is epistemic rationality, whether we like it or not. The reductionistic naturalistic cognitivist realist philosophy of Less Wrong is not satisfying even if it’s the best thing we have at the moment. I highly doubt that this is the point at which we can be satisfied with our epistemic ability and start moving lots of cognitive resources to building marginally rational communities. Following the leader doesn’t work without a smart enough leader, and there are no smart leaders (even if there are a few smart people).
With sufficient deference given to the more capable, I don’t see a problem with a lower average.
Maybe you’re worried about the phenomenon of 9s receiving numerically many upvotes from easily-impressed 8s, with 10s’ contributions not being understood as widely. I don’t think this is too much of a concern if people who have the best ideas and judgments improve their teaching.
Except that you need to be careful that that lower average doesn’t result in goal dilution.
Disclaimer: I’m theorizing here; I haven’t actually been to a meetup.
...Your writing style in this reminds me of that of Paul of Tarsus. You need to write more of these. One for every LW meetup in a new city you go to.
Hrmm, this makes me think about the Rationalist equivalent of the Bible.
We’d have the Rationalist Old Testament, which chronicles the invention of the scientific method and some of its many successes, like relativity and computers and evolutionary biology. This is obviously the longer of the testaments, owing to its larger subject matter. We learn about many of the facts and rules. This is the basis of the religion “science.”
Then there’s the New Testament. People stand up and say, “Hey, this isn’t quite right. What matters isn’t only the world, but how you think.” And so the new religion of “Rationality” is formed. New insight is given into the actual meanings of the science, and how to change yourself. Let’s have some rationality textbooks as the gospels, Eliezer writings as Paul’s epistles, various other writers as the other epistles. Maybe some Ray Kurzweil as Revelations?
This is going to keep me awake.
This post makes me literally sad.
Living in rural Missouri limits my opportunities to interact with similar awesome-seekers.
I run a meetup in St. Louis, if you’re ever in the area.
http://lesswrong.com/lw/4xl/st_louis_missouri_meetup_now_happening_every_week/
Thanks! I’m actually about an hour south of STL...
What ties you to rural Missouri?
Wife, child, family, friends, business.
That is sad—I know a friend in rural Nebraska who is in a similar predicament (college), and he says that if it weren’t for LW, he might have concluded that people were just un-awesome.
It is sad that demographics limit potential awesome-seekers. That is another reason why I admire Eliezer so much for making this online community.
Some of this post makes me wonder where Less Wrong sits within social networks. I suspect we have close ties to the BoingBoing-Make ecosystem and may even be part of it.
I would be incredibly surprised if we are contained within it. I would be kind of surprised if we’re contained to 90% within any 10X larger group. There is definitely a large overlap with Hacker News.
I concur. I can’t think of any Internet group I know of that LW overlaps substantially with. I can spot small segments of groups I’m in and around (Wikipedia, RationalWiki) and I’m actively trying to recruit people with an interest in rambunctious philosophy discussion … LW has actually interested me in and taught me about philosophy more than anything I can think of, and I studied the stuff many years ago.
I note fair levels of overlap between less wrong and those in the transhumanist, H+, singularity, & cryonics communities.
(cough) well, yes, apart from them, given it’s run by the SIAI :-)
I feel the same way, for what it’s worth.
Most of the NYLW regulars aren’t HN readers.
I live in New York and have been lurking on this site for a while (plus reading HPMoR, of course). This post has inspired me to try to get involved with the NY rationalist community. What is the deal with how the community actually functions? How often are there meetups? Other basic, boring but necessary questions?
More info on the NYC meetup group.
“And the more fun we have the more people will want to join us. That last part is something I only realized was Really Important after visiting New York”
This suggests a strong “I don’t do the people stuff” bias (HP:MOR24) which will be one of the many points I address in my upcoming epic “How to save the world” sequence.
Stay tuned on the LW discussion area for this. I think I’ll lose a lot of friends here if I pollute the main LW board with my particular agenda ;-)
Downvote to −10 if I haven’t written a discussion post along these lines in the next 2 weeks (have to sort out my taxes first—boohoo)
[EDIT: unless someone else beats me to the exact same post. But I guess that would be unlikely and funny enough to lose 10 clippies over]
I consider this commitment fulfilled.
http://lesswrong.com/r/discussion/lw/5gy/help_i_want_to_do_good/
I think it’s a cultural blind spot (fun vs. useful) at least as much.
Also, I think maintaining fun is hard, though I’m interested in arguments that it isn’t so hard as all that.
Commitment device recognized.
You can get away with all sorts of stuff if you frame it as trying to save the world. Even altruistic ventures of extremely low expected return are well received.
I’m curious as to whether this comment is descriptive or normative, and whether it’s about LW subculture or society in general.
The “bad” side of this is that “trying to save the world” becomes a signalling game of little real value.
The “good” side is that we should encourage ventures of little expected return, if people are starting to think along the right lines or finally showing a commitment to “doing”.
Definitely entirely descriptive. About LW subculture although it would apply generally as well.
If figuring out how to save the world is your agenda, then I suspect it is a more common one than you think around these parts. Looking forward to your post.
Most trivial nitpick of the week contender:
If you are ‘becoming awesome’ then the trait must already be dynamic. I’d perhaps go with ‘concrete’, or just leave out ‘static’ without replacing it. At least I would if I expected the epistle to be declared divinely inspired and made gospel for the next 2,000 years. And this essay does remind me a lot of Paul’s letter to the Galatians—in an entirely good way!
“And this essay does remind me a lot of Paul’s letter to Galatians—in an entirely good way!” I second this.
I’m fairly sure that was supposed to read “trust you not to be offended if they’re frank with you.”
Yes. Also,
Please edit; can’t parse. I assume you mean that you cherish the beliefs to the extent that you’re actively promoting them.
I understood that as: whether the beliefs you most cherish shouting about...
I think you’re right; that may even be grammatical. If so, a rare total failure to parse on my part. I guess it wasn’t total—I stopped trying because I thought an editing error was likely.
Anyway, I’d revise it whether or not it’s officially in error.
I don’t normally expect people to edit blog posts for style, though of course it’s a fine thing to do if someone wants.
It’s also important not to stonewall.
Briefly elaborate?
Stonewalling is starting from the assumption that making any change is more trouble than it’s worth, and politely refusing to take the advice as a possibility. Some advice deserves no better, of course, but stonewalling shouldn’t be a reflex.
“Thank you” is always a good answer, and then one takes the advice away to chew over even if one thinks at the time that one is dismissing it.
I just realized why resistance training has been working amazingly well for almost 3 months now, but all my other projects have been failing left and right. My exercise has an actual, independent goal—I want to look good. I’m willing to do whatever it takes to get there. The other stuff I’m doing more for its own sake. Abandoning the “one true method” would spoil the fun.
Consider who might resent a friend’s exclusion from the group, especially if it appears capricious. If there are clear norms and people are emotionally prepared to accept the group’s priorities (who it wants to include/exclude), then the collateral damage of a person’s friends leaving in protest would be taken with relative equanimity by those who remain.
Part of the great trouble of being a rationalist is the great, great trouble of finding like-minded people. I am thrilled at the news of such successful meetups taking place—the reason rationalists don’t have the impact they should is poor organization.
On the other hand, I really like what Eliezer says about courage. It is one thing to preach and repeat meaningless words about being courageous and facing the Truth, but if we are too afraid to look like a fool in society—who says we won’t be too afraid to speak the Truth in the scientific community?
I’m parsing the last paragraph as “Getting Things Done is important—you can talk about courage, but you actually have to use courage for it to matter.” Is this accurate?
You can start with some truths.
Trying to measure improvement also lets you track it, to make sure you’re actually improving. Vague, unmeasured improvement isn’t particularly convincing as improvement.
If I were going to join a rationalist club, I would prefer it to be men only. Sexual impulses tend to be counter-productive to the exercise of sound judgment. If we can agree that politics is the mind-killer, the same holds true for sex.
There probably is something to that. Apparently single-sex education is one of the ways that actually work for getting more women into math and engineering, due to less self-consciousness about gender roles when studying as a teenager.
I don’t think it’d be worth the effort for adults though. Socializing people to behave in mixed-gender environments is pretty important in Western culture. Adults are expected to be able to deal with mixed-gender groups, generally are able to deal with mixed-gender groups, and enforcing a single-gender membership would raise the weirdness perception for the club for a lot of people.
Single gender groups for adults would be worth trying as an experiment.
You’re guessing that adults are reliably able to handle mixed groups, even when the groups are doing activities where the members might have bad memories related to gender roles.
I’m guessing that adults will do mixed groups better than teenagers. Agree that single gender groups would be an interesting experiment.
Can you expand on that?
Suppose that some women had been told repeatedly when they were girls that they were bad at math, and saw boys doing better and getting more attention in math class—they might do better in an all-women math class.
Similarly, if some men had been told as boys that they were less good at relationships than women, they might want to start out therapy in all-male therapy groups, or groups which aren’t exactly therapy—note that PUA is used for dealing with women, but the support structure seems to be typically all male.
In both cases, they would presumably want to build confidence and knowledge and then take both into being comfortable with the other sex.
You are, of course, quite correct.
The disadvantage of excluding women (or men) is far too large. Just like any other rationalist, they have information, experience, and perspective that are valuable. And more so than rationalists of your own gender, they can share near insight about issues personal to women and far insight into issues personal to men, insight that is extremely rare to find in a group of men. There is a whole realm of gender-related affective death spirals that are terribly easy to fall into in gender-segregated groups. This applies to almost any other significant culture gap as well (black/white, rich/poor, urban/rural, etc.).
There’s a common error described some places as “privilege blindness” referring to how easy it is for those who are privileged in some way to go through life completely oblivious to the way the world works for those who do not share that good fortune. This is a classic example of an affective death spiral, and will be a huge potential pitfall for any all-male or even mostly male group.
It might make sense for larger groups with plenty of both genders to have some separate meetings, and that’s a worthy experiment. But keeping apart indefinitely seems extremely unwise.
Parent upvoted even though I disagree strongly, because this is an issue worth discussing and bringing in empirical data.
There are a lot of things that are counter-productive to the exercise of sound judgment. Getting rid of such things is largely the point of rationality.
It may be that you are incapable of functioning well around women right now, but don’t you want to do better? By arguing for a “rationalist” group which explicitly caters to this irrationality, you are already conceding the fight against it.
Sure, if I didn’t have to give up something else. But perhaps it’s a matter of picking and choosing one’s battles.
I’m reminded of how Ramit Sethi won’t accept anyone onto one of his courses who has credit card debt. If you do and he finds out, WHAM! you’re kicked off and your money is refunded.
The lesson I take from that is: “seriously—first things first.”
What would you have to give up?
I usually dislike when other people say this, but “I wish I could upvote this multiple times”.
Whatever you may think about Brazil84′s opinion, this comment is being downvoted unreasonably. He has stated his preferences and the reasons for having them honestly and politely. You may disagree with him as much as you like, but he definitely didn’t commit any fault that would warrant treating him as if he were a troll, spammer, rude, or nonsensical.
I originally downvoted your comment but I’m reversing that to an upvote due to the reception it received. The behavior displayed in response to your comment demonstrated that the problem you mention is, in fact, a genuine one.
I downvoted because I want to see less of the grandparent.
Can you elaborate on what you would like to see less of?
I pretty much agree with Adelene Dawner’s sibling comment here. I will go further and say that I find brazil84′s comment to be exclusionary speech because of its connotations. Now, some commenters here are taking the denotation of brazil84′s comment seriously and disagreeing with it; and I might be among them, if the comment had been phrased like so:
I also downvoted for that reason. I want to see less of people posting without thinking about the results of their comments (e.g. the apparent surprise that Alicorn was offended—that should have been really obvious as a likely outcome). I want to see less of people trying to maximize their own comfort at the expense of others, the group as a whole, or specific subgroups of the group as a whole (I strongly suspect that having two rationality groups, divided on gender lines, will be non-optimal in several ways, and that that non-optimality will probably disproportionately affect the womens’ group, since there tend to be fewer of us here). I want to see less rudeness. I want to see less non-meta focus on gender in the first place, though this particular desire is minor and would not have been sufficient to warrant a downvote on its own.
Exactly. This whole thing looks like someone generalized from one example, didn’t think before they posted, made some errors, and got downvoted—and then some people jump to their defense because he’s in a group that the lesswrong-persona empathizes with.
The comment that started this implies the poster didn’t remember that homosexuals exist, thinks everyone is like himself, and doesn’t know or care about the wider effects of excluding women from his rationality group. There are a lot of incorrect defenses of this, as well as ‘technically correct’ defenses that are being selectively applied (“don’t criticize him because gender segregation is not in general a universally negative idea”, “anything that doesn’t have a universally agreed-upon objective definition is a bad criterion”).
If someone came here and posted that they wanted a whites-only group because white people feel uncomfortable and irrational around black people (the only ethnicity other than white), I’d expect a very different response from the community.
Yes, putting something into the reference class of “things that are politically incorrect to say regardless of whether they are correct” would force a different response than putting it in the reference class of “things that can be considered on their literal merit”.
I’ve noticed that differentiating speech based on nationality doesn’t seem to warrant much protection at all while differentiation based on ethnicity is almost inconceivable. So people who want to play reference class tennis can take their pick.
I don’t object to making something unacceptable to speak of due to the political implications, so long as it is clear that that is what is happening. It is the difference between claiming brazil’s ideal is utterly impractical and has undesired consequences and saying it is offensive even to consider those consequences.
This particular mistake (unlike the other objections you mentioned) does not quite fit. Even if Brazil himself is a perfectly mature person immune to bias the group dynamics still influence his experience. If everyone else is a being an ass it sucks for him. (Of course it would be rather arrogant if his aversion is because he thinks everyone else is not like himself! :P)
He makes a claim about possible general competitive tendencies associated with various combinations of subtypes of the human species. What he fails to consider is that there is more than one factor at play. Typically all-male groups, all-female groups, mixed groups, and various combinations of gender-atypical groups will produce different kinds of competition. But there is bias inherent in the all-male and all-female groups too (again separating out the dynamics of gender-atypical hybrids into your next objection, which I accept). Combining the sexes actually eliminates a whole swath of competition types, because you can’t get away with them in a mixed setting. In that way there is potential for the combination to be a stabilizing influence.
Interesting. You seem to be saying that the bias goes the other way—that we’d be irrationally rejecting racially segregated groups? (That is, not that we’d be irrational to reject the proposal, but that we’d be rejecting it without giving it fair consideration.)
I also intended to convey the other errors in that example—I think that in other circumstances, the comment would be considered (recognized?) to be below the standards of discourse for this community.
True! This seems like an unlikely reading of the comment, but not a precluded one.
Good point. Is there any literature on this issue? (Can we call up Lukeprog, the Minister of Citations?)
To be clear: I thought it was a poor suggestion, made in a comment that demonstrated a lack of thought on the issue. If in fact rationalist men abandon their training to butt skulls at the advice of promiscuous monkey ghosts when women are present, I agree that’d be worth studying! But I don’t think it’s worthwhile to exclude women from rationality groups to cater to these preferences without a serious analysis of the drawbacks.
Absolutely not! I would reject the idea of a gender segregated group and most certainly decline to participate in it or associate with it. The downsides are too great, both politically and practically and the advantages somewhat overstated.
What I would say is that it is important (to me) to either be consistent in applying a principle or to be clear about why there is a difference. The privilege of prohibition of exclusion is not universal and depends on the power that the group has claimed. To be clear I am not saying that the decision being political is bad, merely factual. I would apply it myself in this case.
Whenever something related to gender comes up, the result is an ugly ‘substandard’ mess. (The only couple of exceptions to that rule were when HughRistik was the primary participant.) I have a slightly different model of the causal factors at play.
In a counterfactual world where there was no political hotspot over the issue my prediction is that Brazil’s comment would remain stable at either 1 or 2 karma. There would be multiple comments replying to him variously pointing out the signalling implications of establishing such a group, the potential for negative externalities (particularly if only one of the sexes or gender atypical groups does not meet the population threshold to establish all three of all male, all female and mixed group), and an analysis of what the actual social dynamics at play involve. The high quality replies would reach around the 10 karma mark with perhaps one particularly good one making 20.
The problem with Brazil’s comment is that it is insufficient. It doesn’t go into anything beyond expressing a desire for one thing that would remove a significant source of negative utility for him. That is OK; not every comment has to be an essay covering all the broader ramifications of a potential policy proposal. That is for posts.
In a different counterfactual where Brazil had made an actual policy proposal that we should establish gender-segregated rationality groups—or an analogous proposal without the gender hotspot—he would be downvoted significantly. Because once you make a policy proposal, you have made a statement that should have considered all the pros and cons of the situation. But Brazil fell short of that: even if he may actually approve of such a policy, he didn’t advocate it.
The difference between what is said and ‘all possible related things with which that statement could be associated’ matters here far more than it does elsewhere. Going from “I’d like to join” to “we should establish” is crossing a Rubicon. As soon as something has a ‘should’, or especially a ‘we should’, the suggestion has to be something that I fully agree with or I’ll launch a bucketload of punishment in that direction.
There is literature out there, but my mind is better with concepts than with bibliographies. Lukeprog or maybe Hugh could suggest something. But there certainly isn’t as much literature out there as there ought to be!
Yes, and if that was done without an analysis of how analogous female competitive instincts work then I would claim offense! Because it is not just guys who have evolutionary incentives toward bias.
Nope, that’d be outright moronic. I’d like to think that nobody here with the initiative and influence to establish such a group would be dumb enough to actually do so.
I upvoted brazil84′s comment because I want to see more of such things. To take an extreme example, if Genghis Khan used rationality tricks, I want to know them. A while ago, in a discussion of Schelling’s book, we had this exchange:
Imagine that someday brazil84 finds a way to make rationality training 200% more effective, but it only works in gender-segregated groups. (That was, after all, his stated rationale: to make his own training more effective.) Will you reconsider then? Are you so absolutely sure this is impossible that cutting the discussion short with an Alicorn-style “I don’t like what you just said” is the best response?
Unfortunately, much like on Reddit, I think that a lot of people (myself included, though I am working to correct this) treat the up/down buttons as though they were agree/disagree buttons.
There’s some of that, but it seems that “upvote for agreement” is much more common than “downvote for disagreement”, except on hot-button topics (which covers brazil84’s post). Downvoting generally requires disagreement plus rudeness or stupidity.
Tell me about it. A newbie can only get enough karma to post by saying things people agree with. Net result: groupthink.
Nah. Even if you disagree with the LessWrong memes on just about everything, you can easily get to 20 karma with a few moderately interesting Rationality Quotes or some such.
I’ve seen plenty of forums / newsgroups / collective blogs / real-life social circles that developed a powerful groupthink despite the lack of any karma-like mechanic and despite very hands-off or nonexistent moderation.
There’s far more buggy code in our brains than in our servers.
I don’t think so—comments seem much more likely to end up with positive karma than with negative karma, except on some hot-button topics (politics, gender relations and seduction …). So getting enough karma shouldn’t be a problem unless you’re systematically talking about “unwanted” topics, or write particularly bad comments … in which case, them not being able to post top-level posts is a feature, not a bug.
I had to impose the exact same warning on myself. I was trying to use karma points to signal “rationalist status” instead of simply trying my best to comment intelligent things. There apparently is a little segment of my neurology that is constantly scanning for the median groupthink and prompting me in that direction...
I agree it’s annoying and probably a problem, but I think there’s still less groupthink than on most forums I’ve seen. I do agree that it can definitely be frustrating; I have a post I want to write up on the value of starting things sooner rather than later, and I was all set to start typing it up back when I had 19 karma (you need 20 to make a full post), but then I started posting in this thread, and my karma score drifted back down to a single digit. It’s doubly frustrating because I can’t tell if people legitimately think my posts there are without merit or if they’re just using it as an agree/disagree button. If they do think my posts are terrible, no one has said as much.
This is the wrong metric to apply.
Post hoc ergo propter hoc? Looks like you actually came out ahead from that thread, karma-wise.
In fact, I think that thread illustrates LW’s typical reaction to someone with an outlying opinion: initial rejection when it’s poorly put, followed by upvotes when it’s cogently fleshed out & defended. Looks OK to me.
-- or that would warrant mocking him as one participant did (by her own admission).
Let us know how that works out for you.
(It’s not clear to me how this form of rationalism will survive contact with the real world. Are you strong enough to be able to think despite having testicles?)
Perhaps better than some other form. By analogy, if I were teaching a soldier how to fight, I would first teach him how to operate his rifle without any distractions. Later he can practice firing with the added distraction of loud noises, people running and screaming, other people trying to shoot at him, and so on.
For a soldier analogy, how about women in the armed forces? Perhaps the IDF, who have a good reputation for kicking arse in practical tests. (Even if the War Nerd thinks they’re overrated.)
I am entirely unconvinced that a rationalism brought up without this bit of humanity will actually survive its first exposure to air, and your analogy doesn’t convince me. (You are of course entirely entitled not to care if I’m convinced.)
Though there may be, e.g., past data you can point to that shows this as the important criterion.
And, as Nancy Lebovitz points out, an experiment would be worth running.
This calls for better judgement, not less sex.
So you’d want it to be straight men only. Presumably with the option to create a straight girls’ group. Gays would be able to form pairs only with opposite sex counterparts, and bisexuals would be shit out of luck, is that the idea?
I think you’re being a little unfair to brazil84′s comment. Adding a woman to a men-only group affects all (edit: many, not all) men because they feel an impulse to compete for her. A gay guy won’t cause this reaction.
Policy debates should not appear one-sided. Some mixed gender groups do have downsides, which may be important to some people. In my experience, being in a group with many males and few females feels slightly less comfortable than either an all-male group or an evenly mixed group.
Yes, the vast majority of debates in the space of possible policy debates should appear one-sided.
Except the policy debates that actually come up in real life are not drawn uniformly from the space of all policy debates. The one-sided issues are typically not worth mentioning, simply because they are one-sided.
Exactly. Another way to put it would be—policy debates should not appear one-sided, so long as you do not consider all proposals about policy to constitute policy debates.
(“PDSNAOS” does not mean “people don’t have bad ideas”)
“One-sided”, as I understand it, doesn’t mean that, on net, one side wins by a comfortable margin; it means all the arguments go the same way.
Well, if we’re being picky: for all natural numbers n, let P(n) be the proposal “all future policy decisions should be decided by a sack containing n potatoes”.
I meant that as saying all the considerations for deciding any given issue go the same way, not all issues to be decided go the same way.
Right, but there really aren’t any good arguments for adopting P(n) for any n—none worth considering, at least. And that’s a countably infinite number of policy debates that we don’t need to have!
But that’s an example of “wins by a comfortable margin”, not “all the arguments go the same way”. For example, P(n) is cheap to implement for low n.
No cheaper than leaving out the sack and the potatoes. Do you really think that there are any benefits of P(n) for any n that would justify having a debate over it? I think all the arguments go the same way for sufficiently small values of “all”—that is, it’s “one-sided” enough that it shouldn’t even be brought up.
One reason to bring up argument X against policy P when policy P is clearly better is that there might be a slight modification of P that retains the advantages of P while addressing argument X.
Speaking as a heterosexual male, no it doesn’t. People, even young human males, can be mature enough not to have an impulse to “compete” for every female they encounter.
Describing it as an “impulse to compete” is inaccurate. It’s more like an increased desire to be seen/noticed, that results in increased competition, aggression, and risk-taking behaviors as a side-effect, with the strongest effects occurring when there’s only one or two females, and several males present. (Perhaps a lekking instinct is being triggered.)
Anyway, it’s certainly possible to suppress the behaviors the impulse is suggesting, but merely being aware that one is being biased in this direction is not the same thing as stopping the bias.
In fact, it’s likely to motivate one to try to show off just how not competing you are… i.e., to stand out by making a show of not standing out, by being… “mature” as you put it.
So, if you’ve been priding yourself on being more mature in such situations, it’s probably because your brain selected a display of “maturity” as your strategy for competing. ;-)
IOW, it is a “live fire exercise” in debiasing behavior.
This is depressing.
Question: is this the depressing bit?
(My tentative solution: figure myself out before others do. Then I feel much better about it.)
Wouldn’t what you are describing be happening to some extent on this forum as well?
It certainly can happen in virtual venues, but IME the experience is nowhere near as visceral. Until you mentioned the idea, it actually hadn’t occurred to me it could happen without actually seeing or hearing the people involved.
Then you are unusual. This is a really standard ape behaviour effect.
It still triggers my “wtf” detector, but the single-sex rationalist group experiment may be worth running.
Not just unusual, mistaken about a general claim. Humans (of either sex) behave differently in a mixed group. The social rules and payoffs are entirely different. Not behaving differently would be a mistake, even for those people who can emulate a different personality expression consistently in the long term with no adverse effects. If others are being more competitive you need to push back just to hold your ground.
Mind you, I consider rationalist meetups a terrible place to meet women. Apart from it being a hassle to deal with all the other guys (and annoying for the swarmed girls), the gender imbalance inflates social value. Basic economics ensures that for a given amount of social capital you can get a more desirable mate at other locations. There are plenty of intelligent and rational women out there who don’t go to rationalist meetups, and you encounter them when you are a breath of fresh air and a kindred spirit rather than one of a dozen walking stereotypes. Then there is the unfortunate tendency for people (of either gender) with inflated social value in a specific context to be kind of a pain in the ass.
Writing off that particular social domain could be considered lazy or otherwise low status but I prefer to consider it one of the MIN parts of the min max equation. While it is still necessary to behave differently in the mixed group and be somewhat more aggressive it frees up a bunch of background processing and eliminates a swath of social-political constraints. Although you still have to pay more attention to the approval of the scarce women. They have far more social power and influence than they otherwise would so can damage you by more than just their own personal disinterest. Not that social politics matters much at all for occasional meetups where there is not much of a hierarchy anyway. More of a work consideration.
If we weren’t unusual, we wouldn’t be on Less Wrong. We supposedly pride ourselves on being more sane than the average population, no?
“We are unusual” is not a licence to say “We have a significant chance of being unusual in this particular manner that just happens to be convenient to my argument.”
What evidence were you thinking of that this rule does not apply to LessWrong readers in particular?
I wasn’t primarily arguing that it does not apply, more that it might not apply.
As for reasons that it might not apply—for starters, awareness of the issue enough to discuss it. Same way it works with awareness of all other biases.
Cutting half the potential membership out of a rationalist group seems to me a high enough price to pay (especially given how few we are, and given the impression it’d give to outsiders) that we ought to consider very carefully how big the downsides of gender inclusiveness really are in the given situation. Not just say “standard ape behaviour”.
That’s certainly an excellent start. But awareness of and being able to cope with a bias doesn’t make it go away—it takes considerable practice until you’re not just compensating for it. The mind is a very thin layer on top of a chimp—the biases run deep.
I agree that Cousin It’s statement is literally false due to his use of the word all, but given that not all men are perfectly mature in your sense, I expect the essential concern to remain valid: adding a woman to a male-only group will tend to change the social dynamics, in part due to the impulse that Cousin It mentions.
(I mention this for the sake of completeness; speaking only for myself, I think that explicitly single-sex groups are a terrible idea and would not participate in one.)
Agreed. Edited the comment. Sorry.
What exactly is your point here? Are you saying that my proposal is imperfect because it could never remove all sexual distractions for everyone? Are you saying that my proposal is unfair to homosexuals?
I think it would be valuable for you to spell out your argument.
I was mostly mocking you, rather than making an argument. The nearest item in argumentspace translates to something like “if sexual impulses are counter-productive, then non-straight people cannot work in groups without this handicap, which implies that asexuals are magic and gay/bi people are never going to be as effective at collective rationality as straight people who segregate their genders; empirically, asexuals are not magic, and there is no evidence that bisexuals/gay people are less effective in groups than straights who happen or arrange to group themselves with others of the same sex”.
Or, put more simply, brazil84′s model appears to be flawed.
(This doesn’t necessarily imply that the effect he’s claiming to have observed doesn’t exist, though.)
And why exactly were you mocking me if you (mostly) weren’t making an argument? I really would like to know.
My guess is that my post pushed some emotional button with you.
This is a fairly predictable result of telling someone you don’t consider them welcome.
AdeleneDawner is correct. I do not like it when people announce that they wish to form communities I would be unwelcome in because of a “protected” feature (sex/sexuality/race/whatever). (This is importantly different from forming communities based on non-protected features, like willingness to pay membership dues or expertise in a topic, and also importantly different from forming communities in which my presence would be pointless, e.g. I would have no reason whatsoever to be at an Alcoholics Anonymous meeting.)
I would recommend that you make an attempt to free yourself of this dislike. While some ‘protected features’ have few real consequences (skin colour won’t make you out of place anywhere except possibly a tanning salon), many do, gender definitely being one of them. Sometimes it will be easy to ignore or work around those consequences, and sometimes it will not.
If brazil84 does sincerely find that his ability for rational discussion is hampered by female pheromones (metonymy), and you do happen to emit female pheromones, you certainly shouldn’t be blamed for that, but neither should he—it is a weakness of his, but one that falls well within the demands of a modern and open society. While you are absolutely required to behave in a civil manner even in the presence of romantic attraction, it is acceptable if it causes you to perform less than optimally.
As long as he (a) recognises that in a modern and open society, he will eventually need to learn to think and behave rationally even when surrounded by women, and (b) does not by his actions prevent women from joining a rationality club, I do not think he is going out of bounds when he wishes to establish a group that caters to this need of his.
I hate that I cannot come up with a better, less loaded example (I fall into a pretty damn privileged demographic), but: I would understand if I needed to join a rape victim support group and ran into one that wouldn’t let me in because the women members were uncomfortable to talk about these experiences around a 191cm / 90 kg bearded deep-voiced man. Of course, I would find it unacceptable if that were the only support group in my area and it chose to leave me completely in the cold rather than create some discomfort to other members.
Your point with the support group example is well-taken. I will reevaluate and possibly revise my dispositions here.
I have nothing to say against this eminently upvotable comment, of course. It just so happens that (no doubt primed somewhat by a recent conversation with Vladimir_M) I feel the need to take note of a remarkable milestone:
This is literally the very first non-native-English-speaker shibboleth that I’ve ever detected in any of your comments. (The more idiomatic construction is “uncomfortable talking”.) I think I’ve managed to “catch” all the other prominent near-perfect non-natives at some point long before now (Morendil, the various Vladimirs, even Kaj Sotala), but you were the last holdout, by a significant margin. Well done!
What do you think the gender demographics are for lesswrong?
To clarify: I would expect that a men-only rationality club would indeed limit women’s access to rationality clubs.
Can you say a little more about what a “protected” feature is in this context?
That is, suppose I wish to form a community in which Xes are unwelcome, and I want to predict whether you will dislike it if I announce that wish.
I’m pretty clear that you will dislike it if X in {woman, Caucasian, queer, attracted-to-men, attracted-to-women} and not (necessarily) if X in {unwilling-to-pay, non-expert-in-$FOO}.
What about X in {adult, American, non-recovering alcoholic, hearing, sighted}?
More generally: is there a rule I could apply to predict the result for given X, supposing I knew the relevant demographic facts about Alicorn?
I waver on the extent to which age is a protected category, but normally err on the side of not caring about age-specific community arrangements, to the extent that these overlap with pointlessness or have equivalents for multiple age groups. (E.g. a nursing home is not obliged to offer children’s classes if it has adult classes, because that would be silly, and a driving school does not need to admit twelve-year-olds, because, for reasons that are not the school’s responsibility, twelve-year-olds may not drive.)
Disability status (sensory, mental, physical, and to a lesser extent disease/addiction related) is protected but interacts strongly with pointlessness. Disease/addiction are less protected to the point where one might exclude alcoholics from a wine tasting on that characteristic alone, or send a kid home from school because ey has a contagious disease which is incidentally causing a temporary disability.
Nationality is weakly protected but interacts (through geographic location and language/culture) with pointlessness and with non-protected features, so tends not to look protected in terms of my reaction to various items.
If I had to describe a rule, I’d describe it in terms of the voluntariness of the feature, the background likelihood that people discriminate against those with this feature (not just the individual likelihood that the community-former is being exclusive for discriminatory reasons), and the difficulty/intrusiveness of hiding the feature should one want to do so. So, for example, race is maximally protected, because it’s completely involuntary, racism is totally a thing humans do, and it’s difficult to conceal.
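To make the shape of that rule concrete, here is a minimal sketch in Python; the three factors are the ones named above, but the equal weighting and the numeric example scores are hypothetical illustrations, not part of the rule as stated.

```python
# A minimal sketch of the three-factor rule above. The factor names
# come from the comment; the equal weighting and the example scores
# are hypothetical illustrations, not part of the stated rule.

def protectedness(involuntariness, discrimination_prevalence, concealment_difficulty):
    """Score a feature from 0 (unprotected) to 1 (maximally protected).

    Each argument is a judgment call on a 0-1 scale:
    - involuntariness: how little control a person has over the feature
    - discrimination_prevalence: background likelihood that people
      discriminate against those with the feature (not the individual case)
    - concealment_difficulty: how hard or intrusive it is to hide
    """
    # Equal weighting is an assumption; the comment names no weights.
    return (involuntariness + discrimination_prevalence + concealment_difficulty) / 3

# Race scores near the top on all three factors, matching the claim
# that it is "maximally protected":
print(protectedness(1.0, 0.9, 0.9))  # ~0.93

# Willingness to pay membership dues is voluntary, rarely a target of
# background discrimination, and trivially changeable:
print(protectedness(0.1, 0.1, 0.1))  # 0.1
```

(On this toy scoring, predicting the reaction to a given X reduces to estimating three numbers, which is roughly the kind of rule TheOtherDave asked for.)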
(nods) I think I follow.
So if you moved to a community where people with your color hair were routinely discriminated against and hair dye illegal, you would dislike the idea of a not-your-hair-color-only club, but in a community without such constraints you would be all right with it?
Actually, I disapprove of hair dye for unrelated reasons, but yeah, you seem to have the gist of it.
What about something like IQ? It satisfies all of your criteria for being maximally protected, but also seems relevant to many tasks. Plus, somehow I don’t think you object to the existence of Mensa.
Using an arbitrary and subjective “pointlessness” exception, you can derive any conclusions you like. Just apply the general principle consistently as long as you like the conclusions, and otherwise proclaim that the “pointlessness” exception applies. And voila, you can bask in the glow of your high principles, which just happen to imply conclusions to your complete liking. (Of course, the general principle would produce absurd and impractical results if really applied consistently, so someone who subscribes to it has to operate with some such unprincipled exceptions.)
The distinction between “protected” and “non-protected” characteristics is of course similarly arbitrary and ultimately serves a similar purpose, though you don’t have personal control over that one, as the power of defining it is a prerogative of the state.
That said, I don’t mean to point a finger specifically at you by pointing this out. This mode of thinking is all-pervasive in modern society, and nobody is immune to it completely. But on a forum dedicated to exposing biases and fallacies, it should be pointed out.
It’s reasonable for people to care about how much of the world they’re welcome in.
What is your evidence that my pointlessness criterion is arbitrary?
Such criteria are always arbitrary, since there is no universally agreed definition. How exactly do you decide whether the presence and participation of a person in a group is “pointless” or not? Ask five different people, and you’ll get five different answers. Yes, you can point to extreme examples where almost all would agree, but the problem is that things are often not that clear.
The “protected” criterion is similarly arbitrary, except that the government gets to define that one. Even then, if you ask for a precise definition of when some category is considered protected in a given context, you must refer to a whole library of case law. (And you should also consult an expert lawyer who is knowledgeable about the unwritten norms and informal intricacies that usually apply.)
Unsurprisingly, humans being what they are, when they use such arbitrary criteria to answer problematic questions of law, ethics, etc., what they end up with are rationalizations for attitudes held for different reasons. Again, please note that I’m talking about something that practically everyone engages in, not some personal vice of yours.
Could you characterize what you mean by a “protected” feature?
Most likely one that is difficult to change / not obviously under conscious control.
Aren’t most human “features” of such a nature?
“Most” depends on what set of features you consider, but I would happily agree with “many.”
Your club’s motto: “Humans need not apply.”
That should work great, as long as there’s no such thing as same-sex attraction in humans!
Wait...
Most people are heterosexual. Anyway, you are obviously angry at me from our exchange in the racism thread. Please don’t go around digging up my old posts to respond to just out of anger. Instead, you might ask yourself why exactly you are feeling angry. Could there be cognitive dissonance at work?
Most people demonstrate heterosexual behavior in modern heteronormative society. There is a huge difference between this and the generalization “most people are heterosexual”. In ancient Greece, “most people” (or men, anyway) were capable of having both pederastic relationships and productive heterosexual marriages. I have no data (though I’d really like to see some) on how much societal norms affect orientation, which is itself a relatively new concept.
If “is heterosexual” is determined by “sexually attracted to a given gender”, and sexual attraction to genders is mostly controlled by the sexual normativity of a society (is this the case? I believe so, but I notice I have no evidence), then there is less of a difference between the two than you’d think.
Basically I distinguish “capable of experiencing sexual feelings towards” from “will ever actually have an experience with”, here. It’s like the difference between saying “I’ll, like, never fall in love with a black man” (due to the demographics of my current location) and saying “I never could fall in love with a black man”. It seems to me that the logical extension of these principles is that people may be capable of sexual feelings differing from the sexual norms of their society, to a greater extent than the deviation already present, but do not articulate, understand, acknowledge, or have opportunity to experience these feelings. (There has to be a more sophisticated way to phrase this than “almost everyone is secretly a little bisexual”, because that of course dramatically oversimplifies the matter and gives the wrong mouthfeel, but.)
I guess “secretly a little bisexual, but due to society’s constraints will never consider or pursue a same-sex relationship” strikes me as heterosexual, not bisexual.
Sexuality is one of those areas where people want an abstract ‘core’ that is held separate from and above environmental factors. For example, a person may like to believe “I am the kind of person who could fall in love with a black man” and feel that never having fallen in love with a black man is a fact about their environment, not about their capacity to love. I was wary that the difference you elucidated was something like “I like to believe that I am the kind of person who would be sexually attracted to both genders if only society were more permissive”.
Well, my comment was kinda focused on modern society. I’m not sure how things were in ancient Greece. Would Socrates or Plato have been particularly distracted if a Greek girl in a short toga had wandered into one of their Socratic sessions?
Probably! My intuition is that your art as a rationalist is most needed when it’s hardest to exercise (HJPEV, I think, possibly also in the Sequences), and that you shouldn’t expect the world to give you the peace of mind to apply all your skill to a question. I can bench-press more weight with a proper safety bar and a spotter, but the real world doesn’t often offer safety bars and spotters, so I press less weight, without the bar and spotter, and get a better estimate of my capabilities.
Would the same reasoning apply to noises, like jackhammering, people talking on their cell phones, etc?
Yes. Also to time-pressured situations like a question requiring an immediate answer, and emotionally charged situations.
I disagree; I think it’s better to practice stuff without distractions, at least at first. So, for example, I wouldn’t prefer to have rationality club meetings at a construction site, or to have people talking on their cell phones during the meeting.
That’s unexpected. In this comparison of sex-based distractions to construction sites, jackhammers turn out to be analogous to breasts. I’d usually expect something different.
It’s not so much a matter of comparison as a matter of applying shokwave’s reasoning to other distractions.
He didn’t make the argument that simply having girls present is only a mild distraction. Instead, his argument was that one should accept distractions because they are present in normal life.
I know; I agree with your argument. (Without supporting sex-segregated Less Wrong meetups as being a remotely practical idea!)
Ah. The male-only problem is pretty much a permanent decision—it’s a Hard Problem to attract females to an all-male group. So if you had to decide between never having distractions and always having them… I would pick distractions. Otherwise I feel training caps out too early.
Who cares? It’s not like this is a ballroom dancing society. Besides, as a practical matter, what I envision is occasional meetings where girls are excluded. I think it would also make sense to have girl-only meetings.
I care. I certainly care about attracting half of humanity to rationality groups more than I’d care about attracting that subset of males that would be significantly distracted in mixed-gender company.
At the start of this discussion you used the term “men-only”. But in contrast you’re consistently using the term “girl” to refer to a member of the opposite gender.
The corresponding term to “men” and “men-only” is “women” and “women-only”. If you’d used “male”, the corresponding term would be “female”. Only if you had used “boy” or “guy” could the corresponding term be “girl”.
Take care with the connotations of your word choices, especially given the content of your suggestions.
If that’s your preference, then I can’t really argue with it. In my opinion, people are too concerned about attracting girls (or women if you prefer) to meetings.
Downvoted for failure to either correct or justify the insulting connotations of your word-choices.
Why do you think that I should care more about attracting easily distracted boys to meetings, than I should care about attracting adult female rationalists to such? To make it more specific, why would I choose to discuss with you, if that meant I had to trade away the capacity to discuss with Alicorn or AnnaSalamon or NancyLebovitz?
Frankly I think that your attitude would be much more distracting to me than the presence of boobs would be.
I wish I could say the same about myself! That is admirable.
It’s just a matter of who you prefer. Anyway, I’m not going to get sidetracked in a debate over “girl” versus “woman.” If you prefer to say “boy,” I can’t control your use of language.
I do NOT so prefer it. My point is that words with bad connotations distract too, unless one fully intends said bad connotations.
Why did you engage in that type of distraction, a distraction you could easily remove without any negative repercussions I can think of?
Serious answer from someone who’s in the NYC group:
In practice, people do care. Both guys and girls are more likely to attend an event with a less skewed gender balance. Gender balance helps attendance at events where there’s socializing.
I don’t know what kinds of topics you’re imagining, but usually we don’t focus on topics that need to be single-gender. Our topics are things like clustering algorithms and prediction markets and TED talks. Nothing you really need to shoo the girls or boys out of the room for.
Not that I’d mind a Rationalist Hen Party sometime.
Is the goal simply to maximize attendance?
Not at all costs, but it is a goal.
If there comes a time when there are enough rationalists who want to go to rationalist meetups that there can simultaneously be all-male, all-female, and mixed-sex groups, then that would work, and would even have some benefits for those members of either sex who could plausibly be slightly more comfortable. I don’t care if you also include a group for otherkin.
I approve of the ideological stand you are taking. Unfortunately, evolution isn’t nearly as open-minded. Of all the prevalent trends in human behavior, sexual attraction is the most absurd one about which to say “but it could just be cultural”. Evolution cares about babies, not political convenience.
Because it’s not like there’s clear evolutionary evidence for other potential reasons to have sexual attraction, right?
Actually, I came upon this post and this thread while digging around the site for posts relevant to rationalist communities. Your post caught my attention while I was reading the comment thread.
Because not getting laid is one of the main precepts of instrumental rationality...?
(Currently seeing someone I met through the NYLW group.)