Open thread, July 16-22, 2013
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Given the discussion thread about these, let’s try calling this a one-week thread, and see if anyone bothers starting one next Monday.
Given our known problems with actively expressing approval for things, I’d like to mention that I approve of the more frequent open threads.
I approve of your approval! I also object-level approve of this thread.
I want to express my approval, too.
Me too; the biweeklies grew too bloated.
While reading a psychology paper, I ran into the following comment:
Besides the obvious connection to Schmidhuber's esthetics, it occurred to me that this has considerable relevance to LW/OB. Hanson in the past has counseled contrarians like us to pick our battles and conform in most ways while not conforming in a few carefully chosen ones (eg Dear Young Eccentric, Against Free Thinkers, Even When Contrarians Win, They Lose); this struck me as obviously correct, and it suggests thinking of oneself as having a "budget", where non-conforming on dress and language and ideas all at once blows one's credit with people / discredits oneself.
This idea about familiarity suggests a different way to think of it, in terms of novelty and familiarity: ideas like existential risk are highly novel compared to regular politics or charities. But if these ideas are highly novel, then they are likely "distrusted and hard to process" (which certainly describes many people's reactions to things on LW/OB), and any additional novelty, such as novelty of vocabulary or formatting or style, is more likely to damage reception or push readers past some critical limit than it would be if applied to some standard, familiar, boring topic like evolution, where, thanks to sufficient familiarity, idiosyncratic or novel aspects improve reception rather than damage it. Consider the different reactions to Nick Bostrom and Eliezer Yudkowsky, who write about many of the same ideas and problems: no one puts on Broadway plays or YouTube videos mocking Bostrom or accusing him of being a sinister billionaire's tool in a plot against all that is good and just. On the other hand, Hofstadter's GEB is dearly beloved for its diversity of novel forms and expressions, even though it is all directed toward exposition of pretty standard, unshocking topics like Gödel's theorems or GOFAI.
This line of reasoning suggests a simple strategy for writing: the novelty of a story or essay’s content should be inverse to the novelty of its form.
If one has highly novel, perhaps even outright frightening ideas, about the true nature of the multiverse or the future of humanity, the format should be as standard and dry as possible. Conversely, if one is discussing settled science like genetics, one should spice it up with little parables, stories, unexpected topics and applications, etc.
Does this predict the success of existing writings? Well, let's take Eliezer as an example, since he has a very particular style of writing. Three of his longest fictions so far are the Ultra Mega Crossover, "Three Worlds Collide", and MoR. Keeping in mind that the first two were targeted at OB and the last at a general audience on FF.net, they seem to fit well: the Crossover was confusing in format and introduced many obscure characters and allusions, in service of a computationally-oriented multiverse that only really made sense if you had already read Permutation City, and so it is highly novel in both form and content; naturally, no one ever mentions it or recommends it to other people. "Three Worlds Collide" took a standard SF space-opera short-story style with stock archetypes like "the Captain" and saved its novelty for its meta-ethical content and world-building; accordingly, I see it linked and discussed both on LW and off. MoR, as fanfiction, adapts a world wholesale, reducing its novelty considerably for millions of people, and inside this almost-"boring" framework introduces its audience to a panoply of cognitive biases, transhuman tropes like anti-deathism, existential risks, the scientific method, Bayesian-style reasoning, etc; and MoR has been tremendously successful on and off LW (I saw someone recommend it yesterday on HN).
Of course this is just 3 examples, but it does match the vibe I get from reading why people dislike Eliezer or LW: they seem to have little trouble with his casual informal style when it's being applied to topics like cognitive biases or evolution, where the topic is familiar to relatively large numbers of people, but then are horribly put off by the same style or novel forms when applied to obscurer topics like subjective Bayesianism (like the Bayesian Conspiracy short stories; actually, especially the Conspiracy-verse stories) or cryonics. Of course, I suppose this could just reflect that more popular topics tend to be less controversial, and what I'm actually noticing is people disliking marginal minority theories; but things like global warming are quite controversial, and I suspect Eliezer blogging about global warming would not trigger the same reaction as, say, his "you're a bad parent if you don't sign kids up for cryonics" post that a lot of people hate.
Have I seen this “golden mean” effect in my own writing? I’m not sure. Unfortunately, my stuff seems to generally adopt a vaguely academic format or tone in proportion to how mainstream a topic is, and a great deal of traffic is driven by interest in the topic and not my work specifically; so for example, my Silk Road page is not in any particularly boring format but interest in the topic is too high for that to matter either way. It’s certainly something for me to keep in mind, though, when I write about stranger topics.
EDIT: put links at https://www.gwern.net/docs/psychology/novelty/index
Speaking of Schmidhuber, he serves as a good example: he spends weirdness points like they're Venezuelan bolivars. Despite him and his lab laying more of the groundwork for the deep learning revolution than perhaps anyone, and being right about many things decades before everyone else, he is probably the single most disliked researcher in DL. Not only is he not unfathomably rich or in charge of a giant lab like DeepMind, he is the only DL/RL researcher I know of who regularly gets articles in major media outlets written in large part about how he has alienated people: eg https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html or https://www.bloomberg.com/news/features/2018-05-15/google-amazon-and-facebook-owe-j-rgen-schmidhuber-a-fortune. And this is solely because of his personal choices and conduct. It's difficult to think of an example of a technologist inventing so much important stuff and then missing out on the gains because of being so entirely unnecessarily unpleasant and hard to bear (William Shockley and the Traitorous Eight come to mind as an example; maybe David Chaum & Digicash too).
(From the standard errors & shuffled results, the decline in revenue from 0.8 to 1.0 happens very fast, so one probably wants to undershoot novelty and avoid the catastrophic risk of overshoot.)
“The Shazam Effect: Record companies are tracking download and search data to predict which new songs will be hits. This has been good for business—but is it bad for music?”
Speaking of Billboard: “What Makes Popular Culture Popular? Product Features and Optimal Differentiation in Music” Askin & Mauskapf 2017:
Brewer 1991, [“The Social Self: On Being the Same and Different at the Same Time”](http://web.mit.edu/curhan/www/docs/Articles/15341_Readings/Intergroup_Conflict/Brewer_1991_The_social_self.pdf)
Chan, Berger, and Van Boven 2012, [“Identifiable but Not Identical: Combining Social Identity and Uniqueness Motives in Choice”](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.462.8627&rep=rep1&type=pdf)
Goldberg et al 2016, [“What Does It Mean to Span Cultural Boundaries: Variety and Atypicality in Cultural Consumption”](http://dro.dur.ac.uk/16001/1/16001.pdf)
Hsu 2006, [“Jacks of All Trades and Masters of None: Audiences’ Reactions to Spanning Genres in Feature Film Production”](https://cloudfront.escholarship.org/dist/prd/content/qt5p81r333/qt5p81r333.pdf)
Kaufman 2004, [“Endogenous Explanation in the Sociology of Culture”](https://sci-hub.tw/http://www.annualreviews.org/doi/abs/10.1146/annurev.soc.30.012703.110608)
Lieberson 2000, _A Matter of Taste: How Names, Fashions, and Culture Change_
Lounsbury & Glynn 2001, [“Cultural Entrepreneurship: Stories, Legitimacy, and the Acquisition of Resources”](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.199.3680&rep=rep1&type=pdf)
Uzzi et al 2013, [“Atypical Combinations and Scientific Impact”](https://pdfs.semanticscholar.org/488a/f28ee062c99330f4277d59ba886b4c065084.pdf)
Zuckerman 1999, [“The Categorical Imperative: Securities Analysts and the Illegitimacy Discount”](https://www.dropbox.com/s/50k36a9j9lwyl8e/1999-zuckerman.pdf?dl=0)
Zuckerman 2016, [“Optimal Distinctiveness Revisited: An Integrative Framework for Understanding the Balance between Differentiation and Conformity in Individual and Organizational Identities”](https://books.google.com/books?id=PVn0DAAAQBAJ&lpg=PA183&ots=v8QKB6HRXZ&lr&pg=PA183#v=onepage&q&f=false)
Useful enough to be a discussion post.
Some more discussion:
“You have a set amount of “weirdness points”. Spend them wisely.”
Idiosyncrasy credit
(But the cited research in the Examples section seems weak, and social psychology isn't the most reliable area of psychology in the first place.)
Bryan Caplan, “A Non-Conformist’s Guide to Success in a Conformist World”:
Were they a LW user? Every once in a while I’ll be surprised when someone links a LW article, only to see that it’s loup-valliant.
I don’t remember. It might’ve been a LW user.
Some anecdotal discussion of the dislike of (too much) creativity:
http://www.slate.com/articles/health_and_science/science/2013/12/creativity_is_rejected_teachers_and_bosses_don_t_value_out_of_the_box_thinking.html
https://news.ycombinator.com/item?id=6861533
Early example: “The Creative Personality and the Ideal Pupil”, Torrance 1969.
See also Schank’s Law.
Katja offers 8 models of weirdness budgets in “The economy of weirdness”; #1 seems to fit best the psychology and other research.
Why is that so? The end of the world is a strong element in major religions and is a popular theme in literature and movies. The global warming meme has made the idea that human activity can have significant planet-wide consequences universally accepted.
Existential risk due to astronomical or technological causes, as opposed to divine intervention, is pretty novel. No one thinks global warming will end humanity.
If you’re well familiar with the idea of the world ending, the precise mechanism doesn’t seem to be that important.
I think what’s novel is the idea that humans can meaningfully affect that existential risk. However that’s a lower bar / closer jump than the novelty of the whole idea of existential risk.
“If you’re familiar with the idea of Christians being resurrected on Judgment Day, the precise mechanism of cryonics doesn’t seem to be that important.”
“If you’re familiar with the idea of angels, the precise mechanism of airplanes doesn’t seem to be that important.”
For the purpose of figuring out whether an idea is so novel that people have trouble comprehending it, yes, familiarity with the concept of resurrection is useful.
People are familiar with birds and bats. And yes, the existence of those was a major factor in accepting the possibility of heavier-than-air flight and trying to develop various flying contraptions.
Awesome job, whoever made this “latest open thread,” “latest rationality diary,” and “latest rationality quote” thing happen!
Brought to you by Lucas Sloan.
But what’s the ‘Karma Awards’?
How are these triggered? Automagically or someone updating the link by hand?
The “latest” rationality diary isn’t the most recent one (July 15-31), for whatever reason.
Edit: It’s been fixed now.
I tried adding the group_rationality_diary tag to it, but I don’t know how/if/when these things reload.
Needs the tag group_rationality_diary; they reload every time there's a new comment, or every 12 hours.
Where did the “Top Contributors—All Time” go?
They will be on the about page shortly.
Agreed. I am glad to see those links.
Some #lesswrong regulars who are currently learning to code have made a channel for that purpose on freenode - #lw-prog
Anyone who is looking for a place to learn some programming alongside fellow lesswrongers is welcome to join.
Thanks for the heads up.
One of the most salient differences between groups that succeed and groups that fail is the group members’ ability to work well with one another.
A corollary: If you want a group to fail, undermine its members’ ability to work with each other. This was observed and practiced by intelligence agencies in Turing’s day, and well before then.
Better yet: Get them to undermine it themselves.
By using the zero-sum conversion trick, we can ask ourselves: What ideas do I possess that the Devil¹ approves of me possessing because they undermine my ability to accomplish my goals?
¹ “The Devil” is shorthand for a purely notional opponent whose values are the opposite of mine.
One Devil’s tool against cooperation is reminding people that cooperation is cultish, and if they cooperate, they are sheep.
But there is a big exception! If you work for a corporation, then you are expected to be a team player, and you have to participate in various team-building activities, which are like cult activities, just a bit less effective. You are expected to be a sheep, if you are asked to be one, and to enjoy it. -- It’s just somehow wrong to use the same winning strategy outside the corporation, for yourself or your friends.
So we get the interesting result that most people are willing to cooperate if it is for someone else’s benefit, but have an aversion against cooperation for their own. If I tried to brainwash people to become obedient masses, I would be proud to achieve this.
This said, I am not sure what exactly caused this. It could be a natural result of a thousand small-scale interactions: people winning locally by undermining their nearest competitors' agency, and losing globally by polluting the common meme-space. And the people who overcome this and become able to optimize for their own benefit probably find it much easier to attract followers than peers; thus they get out of the system, but don't change the system.
Can you give an example of how people resist cooperation? I’m having difficulty identifying such a trend in my past interactions.
P.S. It seems I accidentally double-posted. Sorry about that.
The first example in my mind when I wrote that were the negative reactions about “rationalist rituals” (some comments were deleted). An alternative explanation is that it was mostly trolling.
At the recent LW meetup I organized, I tried to start the topic of becoming stronger: where we would individually want to become stronger, and how we could help each other with some specific goals. The whole topic was sabotaged (other sources later confirmed it was done intentionally) and turned into idle chatting by a participant, who happens to be a manager in a corporation. An alternative explanation is that the specific person simply has an aversion to the specific topic.
A few times it has happened to me that when I approached people with "we could do this as a group together", I was refused, but when I said "I want to do this, and I need you to do this", people complied. (Once it was about compiling a DVD with information from different sources; the second time it was about making a computer application.) People are more willing to obey than to cooperate as equals, perhaps because this is what they are taught. Most likely, in other situations I react the same way. An alternative explanation is that people don't want to be responsible for coordination, motivating others, etc.
I know a few people with hobbies that could be used together to make something greater. For example: writing stories + drawing pictures = making an illustrated story book. When I tried to contact them together, they refused (without seeing each other). Based on the previous experiences, I suspect that if I inserted myself as the boss, and told each person "I want to do this, and I need you to do this", they would be more likely to agree, although I am otherwise not needed in the process.
Uhm, perhaps other people can add more convincing examples?
Source?
Enigma comes to mind. IIRC, to camouflage it, the Brits specifically leaked messages claiming that it was due to some moles in Germany, not just explaining away how data kept leaking but actively impeding German operations. This was also seen in the Cold War where you had Soviet defectors who tried to discredit each other as agents sent to throw the CIA into confusion, and I’ve seen accusations that James Jesus Angleton was a spy or otherwise manipulated into his endless mole hunts by Russia specifically to destroy all agency effectiveness. For a more recent example, Assange’s Wikileaks was based on this theory, which he put forth in a short paper around that time: enabling easy leaking would sow distrust and dissension in networks that depended on secrecy, forcing compartmentalization and degrading efficiency compared to more ‘open’ organizations. EDIT: and appropriately, this is exactly what is happening in the NSA now—they are claiming that Snowden was leaking materials which had been made available to much of NSA, to assist in coordination, and they are locking down the material, adding more logging, and restricting sysadmins’ accesses, none of which is going to make the NSA more efficient than before… Similar to how State etc had to lock down and add friction to internal processes after Manning.
I don’t know if the tactic has any name or handy references, but certainly intelligence agencies are aware of the value of witch hunts and internal dissension.
The Assange paper in question: State and Terrorist Conspiracies. Written considerably prior to Wikileaks entering the spotlight (dated 2006 in that PDF).
Various leaks from Anonymous indicate the FBI (and probably local LEA) uses similar tactics against Occupy and other groups.
My friend and I are organizing a new meetup in Zagreb, but I don't have enough karma to make an announcement here. Thanks!
[Meta] Most meetup threads have no comments. It seems like it would be useful for people to post to say “I’m coming”, both for the organiser and for other people to judge the size of the group. Would this be a good social norm to cultivate? I worry slightly that it would annoy people who follow the recent comments feed, but I can’t offhand think of other downsides.
Suggested alternative to reduce the recent comment clutter issue: Have a poll attached to each meetup with people saying if they are coming. Then people can get a quick glance at how many people are probably coming, and if one wants to specifically note it (say one isn’t a regular) then mention that in the comment thread.
Many meetup attendees don’t have LW accounts, so it may not be a very good measure.
and even the ones who do will likely not bother to vote every single week for regular meetups.
This is what I found when I tried to use Facebook: many of the people who go to meetups, even those who have Facebook accounts, don't bother responding.
Another suggestion is to set up something that e-mails past attendees with a quick poll of whether they are coming to the next meetup (one extra e-mail per week is likely worth it), and there is an updating thingy in the LW post that shows accepted/tentative/declined vs. the total number on the list and the time to the next meetup.
I don’t know which parts of this would be difficult to implement, but it (working with the final product, not necessarily setting it up) is easier than having people answer an LW poll given the complications posted in other comments below.
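For concreteness, here is a minimal sketch of what the e-mail-poll-plus-tally idea could look like, using only Python's standard library. The function names and the accepted/tentative/declined states as plain strings are illustrative assumptions, not an existing LW feature:

```python
import smtplib
from email.message import EmailMessage

RSVP_STATES = ("accepted", "tentative", "declined")

def send_poll_email(smtp_host, sender, attendee, meetup_date):
    """Send one past attendee a one-question RSVP poll for the next meetup."""
    msg = EmailMessage()
    msg["Subject"] = "Coming to the %s meetup?" % meetup_date
    msg["From"] = sender
    msg["To"] = attendee
    msg.set_content("Reply with one word: accepted / tentative / declined")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

def tally(responses, total_invited):
    """Summarize replies into the accepted/tentative/declined counts that
    the meetup post would display against the total number on the list."""
    counts = {state: 0 for state in RSVP_STATES}
    for state in responses.values():
        if state in counts:
            counts[state] += 1
    counts["no reply"] = total_invited - sum(counts.values())
    return counts
```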
If you’re missing a lot of flights, you should arrive at the airport sooner.
Here is some verse about steelmanning I wrote to the tune of Keelhauled. Compliments, complaints, and improvements are welcome.
*dun-dun-dun-dun
Steelman that shoddy argument
Mend its faults so they can’t be seen
Help that bastard make more sense
A reformulation to see what they mean
To whoever downvoted the parent: Please don't downvote methods for providing epistemic rationality techniques with better mental handles so they actually get used. Different tricks are useful for different people.
Alestorm are a very rationalist band. I particularly like the lyrics:
You put your faith in Odin and Thor, We put ours in cannons and whores!
It's about how a religious society can never achieve what technology can.
Being in Seattle has taught me something I never would have thought of otherwise:
Working in a room with a magnificent view has a positive effect on my productivity.
Is this true for other people, as well? I normally favor ground-level apartments and small villages, but if the multiplier is as consistent as it’s been this past week, I may have to rethink my long-term plans.
It could be just the novelty of such a view. I suspect that any interesting modification to your working environment leads to a short-term productivity boost, but these things don’t necessarily persist in the long term. In any case, it seems like the VoI of exploring different working environments is high.
The under-utilized conference room with a great view has become the unofficial thinking room at work.
There is a whole list of little factors that contribute to the success of the thinking room, but major contributors include both the view and the novelty.
I dunno—on one hand, I’d be more tempted to slack off by looking outside; on the other hand, it’d be easier for me to recharge my willpower, by looking outside. I think the former would be a larger effect for me, but I’m not sure.
Question: Who coined the term “steelman” or “steelmanning”, and when?
I was surprised not to find it in the wiki, but the term is gaining currency outside LessWrong.
Also, I’d be surprised if the concept were new. Are there past names for it? Principle of charity is pretty close, but not as extreme.
Google search with a date restriction and a few other tricks to filter out late comments on earlier blog posts suggests Luke’s post Better disagreement as the first online reference, though the first widely linked reference is quite recent, from the Well Spent Journey blog.
Yes, but Luke refers to it as a term already in use.
But apparently not anywhere online accessible to search robots.
Saw this on twitter. Hilarious: “Ballad of Big Yud”
http://www.youtube.com/watch?v=nXARrMadTKk
There is another video from the same author explaining his opinions on LW. It takes two minutes to even start talking about LW, so here are the important parts:
The Sequences are hundreds and hundreds of blog posts, written by one man. They are like a catechism, and teach strange vocabulary like "winning", "paying rent", "mindkilling", "being Bayesian".
The claim that Bayes theorem, which is just a footnote in a statistics textbook, has the power to reshape your thinking so that you can maximize the outcomes of your life… has no evidence. You can't simplify the complexity of life into simple probabilities. EY is a high-school dropout and he has no peer-reviewed articles.
People on LW say that criticism of LW is upvoted. Actually, that "criticism" does not disagree with anything; it just asks MIRI to be more specific. Is that LW's best defense against accusations of cultishness?
The LW community believes in the Singularity, which, again, has no evidence, and the scientific community does not support it. MIRI asks for your money, and does not say how specifically it will be used to save the world.
LW claims that politics is the mindkiller, yet EY admits that he is libertarian. Most of MIRI's money comes from Peter Thiel, a right-wing libertarian billionaire.
Roko’s basilisk...
...and these guys pretend to be skeptics?
Now let’s look at CFAR. They have EY on their board, and they force you to read the Sequences if you want to join them.
Julia Galef is a rising star in the skeptical movement; she has a podcast, "Rationally Speaking". But she is connected with LW, she believes in Bayes theorem, and she only criticizes the political left. She is obviously used as the face of the LW movement because she is pretty! -- This is sexism on LW's part, because men at LW agree in comments that Julia is pretty. If they weren't sexist, they would talk about how smart she is.
People like this are not skeptics and should not be invited to Skepticon!
There’s a user at RationalWiki, one of the dedicated LW critics there, called “Baloney Detection”. I often wondered who it was. The image at 5:45 in this video, and the fact that “Baloney Detection” also edited the “Julia Galef” page at RW to decry her association with LW, tells me this is him…
By the way, the RW article about LW now seems more… rational… than the last time I checked. (Possibly because our hordes of cultists sponsored by the right-wing extremist conspiracy fixed it, hoping to receive the promised 3^^^3 robotic virgins in singularitarian paradise as a reward.) You can't say the same thing about the talk pages, though.
It's strange. Now I should probably update towards "a criticism of LW found online probably somehow comes from two or three people on RW". On their talk pages, Aris Katsaris sounds like a lonely sane voice in a desert of… I guess it's supposed to be "rationality with a snarky point of view", which works like this: I can say anything, and if you catch me lying, I say I was exaggerating to make it funnier.
Some interesting bits from the (mostly boring) talk page:
A proper skeptical argument about why “Torture vs Dust Specks” is wrong.
This is why LW people care about Löb’s Theorem, in case you (LW cultists not belonging to the inner circle) didn’t know.
An ad-hoc explanation is being prepared. Criticising Eliezer for being a high school dropout who has never published in a peer-reviewed journal is so much fun… but if he someday publishes in a peer-reviewed journal and gets citations or whatever other recognition by the scientific establishment, RationalWiki already knows the true explanation: the right-wing conspiracy bribed the scientists. (If the day comes that RW starts criticizing scientists for supporting LW, I will be laughing and munching popcorn.)
How do you know what you know? Specifically, where are those data about who upvoted and downvoted Holden coming from? (Or is it an alternative explaining-away? LW does not accept criticism and censors everything, but this one time the power of popular opinion prevented them from deleting it.)
And finally a good idea:
I vote yes.
The article was improved ’cos AD (a RW regular who doesn’t care about LW) rewrote it.
It was disappointing to see Holden’s posts get any down votes.
I agree, but we are speaking about approximately 13 downvotes from 265 total votes. So we have at least 13 people on LessWrong who oppose a high-quality criticism.
The speculation about regulars downvoting and non-regulars upvoting is without any evidence; could have also been the other way round. We also had a few trolls and crazy people here in the past. And by the way, it’s not like people from RationalWiki couldn’t create throw-away accounts here. So, with the same zero evidence, I could propose an alternative hypothesis that Holden was actually downvoted by people from RW who smartly realized that his “criticism” of LW is actually no criticism. But that would just be silly.
Or there are approximately 13 people who believe the post is worth a mere 250 votes, not 265, and so used their votes to push it in the desired direction. Votes needn't be made, or considered to be made, independently of each other.
One data point: I used to do that kind of thing before the "% positive" thing was implemented, but I no longer do it, at least not deliberately.
I am pleasantly surprised that they didn’t get overwhelmed by the one or two LW trolls that swamped them a couple months back.
Looking through the talk pages, it seems those guys partially ran out of steam, which let cooler heads prevail.
My own thoughts:
I wonder how much "hundreds of blog posts written by one man" is the true rejection. I mean, would the reaction be different if it were a book instead of hundreds of blog posts? Would it be different if the Sequences were on a website separate from LessWrong? -- The intuition is that a "website by one man" would seem more natural than a "website mostly by one man". Because people do have their personal blogs, and it's not controversial. Even if the personal blog gets hundreds of comments, it still feels like a personal blog, not like a movement.
(Note: I am not recommending any change here. Just thinking out loud about whether there is something about the format of the website that provokes people, or whether it is mere "I dislike you, therefore I dislike anything you do".)
Having peer-reviewed articles (not just conference papers) or otherwise being connected with the scientific establishment would obviously be a good argument. I’m not saying it should be high priority for Eliezer, but if there is a PR department in MIRI/CFAR, it should be a priority for them. (Actually, I can imagine some CFAR ideas published in a pedagogical journal—that also counts as official science, and could be easier.)
The cultish stuff is the typical "did you stop beating your wife?" pattern. Anything you respond… is exactly what a cult would do. (Because being cultish is evidence for being a cult, but not being cultish is also evidence for being a cult, because cults try to appear non-cultish. And by the way, using the word "evidence" is evidence of being a brainwashed LW follower.)
What is the relation between politics and skepticism? I mean, do all skeptics have to be perfectly politically neutral? Or is left-wing politics compatible with skepticism and only right-wing politics incompatible? (I am not sure which of these was the author's opinion.) How about things like "Atheism Plus"? And here is a horrible thought… if some research showed a non-zero correlation between atheism and a position on the political spectrum, would it mean that atheists are also forbidden from the skeptical movement?
I appreciate the spin of saying that Julia is just a pretty face, and then suddenly attributing this opinion to LW. I mean, that's a nice Dark Arts move: say something offensive, and then pretend it was actually your opponent who believes that, not you. (The author is mysteriously silent about his own opinion. Does he believe that Julia is not smart? Or does he believe that she is smart, but that this is completely incidental to the fact that she represents LW at Skepticon? Either choice would be very suspicious, so he just does not specify it. And he turns off the comments on YouTube, so we cannot ask.)
If it was a book, it’d be twice the size of Lord Of The Rings.
The only point I feel the need to contest is “EY admits he is libertarian”. What I remember is EY admitting that he was previously libertarian, then stopped.
Well, and “EY is a high school dropout with no peer reviewed articles”, not because it’s untrue, but because neither of those is all that important.
The rest is sound criticism, so far as I can tell.
Here is a comment (from 2007) about it:
It could be interpreted as Eliezer no longer being libertarian, but also as Eliezer remaining libertarian, just moving more meta and focusing on more winnable topics.
Sure, but why does it feel (I mean, at least to the author) so important? I guess it is the heuristic "if you are not a scientist, and you speak a lot about science, you got it wrong". Which may be generally correct, if people obsessed with science usually become either scientists or pseudoscientists.
The part about Julia didn’t sound fair to me—but perhaps you should see the original, not my interpretation. It starts at 8:50.
Otherwise, yes, he has some good points; he is just very selective about the evidence he considers. I was most impressed by the part about Holden's non-criticism. (More meta, I wonder how he would interpret this agreement with his criticism. Possibly as something unimportant, or something that a cult would do to try to appear non-cultish.)
In 2011, he describes himself as a “a very small-‘l’ libertarian” in this essay at Cato Unbound.
I think what this is really saying is that Galef is socially popular especially among skeptics (she has a popular blog, co-hosts multiple podcasts, and all that), but she’s not necessarily smarter, or even more involved in LW activities (presumably, MIRI/CFAR has a reputation of very smart folks being involved, hence the confusion), compared to many other LW folks, e.g. Eliezer, etc. So, the argument goes, it’s not really clear why she should get to be the public face of LW, but it’s certainly convenient in that, again, LW is made to look less like a cult than it really is.
I hope I am not mistaken about this, but it seems to me that MIRI and CFAR were separated because the former focuses on “Friendly AI” and the latter on “raising the sanity waterline”. It’s not just a difference in topic, but the topic also determines tools and strategy. -- To research Friendly AI, you need to find good mathematicians, develop a mathematical theory, convince AI researchers about its seriousness, publish in peer-reviewed journals, and ultimately develop the machine. To raise the sanity waterline, you need to find good teachers, develop a curriculum, educate people, and measure the impact. -- Obviously, Eliezer cares mostly about the former, and I believe even the author of the video would agree with that.
So, pretty likely, Eliezer is not the most involved person in CFAR. I don’t know about internal stuff of CFAR to say precisely who is that person. Perhaps there are many people contributing significantly in ways that can’t be directly compared; is it more important to research the curriculum, write the textbooks, test the curriculum, connect people, or keep everything running smoothly? Maybe it’s not Julia, but that doesn’t mean it’s Eliezer.
I guess CFAR could also send Anna Salamon, Michael Smith, Andrew Critch, or anyone else from their team to Skepticon. Would that be better? Or, unless it is Eliezer personally, will it always seem like the dark overlord Eliezer is hiding behind someone else's face? (Actually, I wouldn't mind if Eliezer went to Skepticon, if he thought this was the best way to use his time.) How about all of them going to Skepticon together? Would that be acceptable? Or is it: anyone but Julia?
By the way, I really liked Julia’s Straw Vulcan lecture, and sent a few people a hyperlink. So she has some interesting things to say, too. And those things are completely relevant to CFAR goals.
Chorus … We should help him read the sequences … shambles forward
The anti-LW’ers have become quite the community themselves, the video is referencing XiXiDu and others.
It’s thoroughly entertaining, the music especially.
Edit: I must say I found this statement by the video’s author illuminating indeed in regards to his strong discounting of Bayesian reasoning:
To his benefit, Dmytry explained it to him, and now all is well again.
Could I get some career advice?
I'd like to work in software. I can graduate next year with a math degree and look for work, or I can study for additional CS-specific credentials (two or three extra years for a Master's degree).
On the one hand, I’m told online that programming is unusually meritocratic, and that formal education and credentials matter very little if you can learn and demonstrate competency in other ways, like writing your own software or contributing to open-source projects.
On the other hand, mid-career professionals in other fields (mostly engineering), have told me that education credentials are an inevitable filter for raises, hiring, layoffs, and just getting interesting work. They say that getting a graduate degree will be worthwhile even if I could have learned equally valuable skills by other means.
I think I would enjoy and do well in graduate school, but if it makes little career difference, I don't think I would go. I'm skeptical that marginal credentials are unimportant (or will remain unimportant in ten years), but I don't know any programmers in person who I could ask.
Any thoughts or experiences here?
What programming have you done so far? Have you worked on any open-source projects? Run your own web site?
I know a lot of people with math degrees working in software engineering or site reliability in Silicon Valley. So it’s definitely possible … but you have to have the skills.
So tell me about your skills. :)
In school, some of my math courses have been programming-intensive (bioinformatics and statistics, all sorts of numerical methods and optimization courses). I've taken most of the CS curriculum as well, but scheduling the remaining class (a senior project) for a double major would take an extra year.
On my own, I’ve written a couple android apps, mostly video games. But that’s about it. No websites and no open-source work.
I have a BS in computer science. I worked at Google for four years. I would guess that your credentials—with a BS in math—would be no bar to getting a programming job. I would focus on direct programming experience instead of further credentialling. Graduate degrees in computer science are generally not required, and not necessarily even useful, for programming jobs in industry. Masters degrees in computer science are especially suspect, because they are often less rigorous than undergraduate degrees in the field. This is especially true of coursework (non-research-oriented) masters degrees.
What type of work in software would you like to do? The rest of my comment will assume that you mean the software technology industry, and not programming specifically.
There are many individual contributor roles in technology companies. Being a developer is one of them. Others may include field deployment specialists, system administrators, pre-sales engineers, sales, or the now-popular "data scientist".
I agree that credentials help with hiring and promotions. When I evaluate staff with little work experience, graduate credentials play a role in my evaluation.
If you could have learned equally valuable skills by other means, then the graduate degree almost always comes out on top due to the signalling/credentialing factor. However, usually this isn't the case. Usually the graduate degree is framed as a trade-off between the actual signalling factor, coursework, research, and graduate institution vs. work experience directly relevant to your particular domain of expertise. There are newer alternative graduate degree programs that may be more useful to you given your strong undergraduate mathematics base, such as a Masters of Financial Engineering* or a Masters in Data Science, which offer a different route to obtaining an interesting job in the software industry without necessarily going through a more "traditional" CS graduate program.
If you are dead set on being a programmer for the next 10 years, please consider why. The reason I bring this up is that some college seniors I've talked to can clearly visualize working as a developer, but find it harder to visualize what it's like doing other jobs in the technology industry, or, worse, have uninformed and incorrect stereotypes of the types of work involved with different roles (the canonical example is technology sales roles, where anybody technical seems to have a distaste for salespeople).
If you are still firmly aiming to be a developer, it may help to narrow down what type of programming you like to do, such as web, embedded, systems, tooling, etc., and also spend a bit of time at least trying to imagine companies you'd like to work for, evaluated on different dimensions (e.g. industry, departmental function, Fortune 500, billing/security/telco infrastructure/mobile, etc.).
One additional point to consider is why not do both by working full-time and immediately embarking on a part-time graduate degree? Granted, some graduate degrees (e.g. certain institutions or program structure) don’t allow for part-time enrollment, but it’s at least something to consider. That way you cover both bases.
* Google MFE or “Masters Financial Engineering”—many US programs have sprung up over the past several years
EDIT: I apologize in advance for the US-centric links in case you are outside of N. America.
I’ve recently noticed a new variant of failure mode in political discussions. It seems to be most common on political discussions where one already has almost all Blues or all Greens. It goes like this:
Blue 1: “Hey look at this silly thing said by random silly Green. See this website here.”
Blue 2, Blue 3… up to Blue n: “Haha! What evil idiots.”
Blue n+1 (or possibly Blue sympathizer or outright interloper or maybe even a Red or a Yellow): “Um, the initial link given by Blue 1 is a parody. That website does satire.”
Large subset of Blue 2 through Blue n: “Wow, the fact that we can’t tell that’s a parody shows how ridiculous the Greens are.”
Now at this point, the actual failure of rationality happened with Blues not Greens. But somehow Blues will then count this as further evidence against Greens. Is there any way to politely get Blues to understand the failure mode that has occurred in this context?
This isn’t entirely a fallacy: if you can’t tell a signal from random noise, either you’re bad at seeing signals or there’s not a whole lot of information in that signal.
Maybe presenting it in that format? “It’s possible the Greens really are that stupid, but alternatively it’s possible that you just missed a perfectly readable signal?”
Another failure mode I noticed is that of a particularly rational Blue noticing that his fellow Blues frequently exhibit failure mode X and concluding that the same is true of Greens.
What with the popularity of rationalist!fanfiction, I feel like there's an irresistible opportunity for anyone familiar with the Animorphs books.
Imagine it! A book series where sentient slugs control people's bodies, yet can communicate with their hosts. To borrow from the AI Box experiments, the Yeerks are the Gatekeepers, and the Controlled humans are the AIs! One could use the resident black-sheep character David Hunting as the rationalist! character; he was introduced in the middle of the series, removed three books later, and didn't really do anything important. I couldn't write such a thing, but it would be wicked if someone else did.
Relevant /r/hpmor link: http://www.reddit.com/r/HPMOR/comments/1hokeq/ideas_for_a_hpmor_sequel_spoilers_1_92/cawbw1i
I’ve run into a roadblock on the Less Wrong Study Hall reprogramming project. I’ve been writing against Google Hangouts, but it seems that there’s no way to have a permanent, public hangout URL that also runs a specified application. (that is, I can get a fixed URL, or a hangout that runs an app for all users, but I can’t do both)
Any of the programmers here know a way around that? At the moment it’s looking like I’ll have to go back to square zero and find an entirely different approach.
Could you have a server that knows where the dynamic url is at all times, and provides a redirect? So I'd hit up lwsh.me and it would redirect me to https://plus.google.com/hangouts/_/etc … that would create an effectively permanent url, even though the hangout itself would change urls.
Looking at the Hangouts API, it appears that when the app is initialized you could call getHangoutUrl() and then pipe it back to the server. This could probably be used in a pretty dynamic manner too… like whenever anyone uses the app, it connects with the main server and adds that chat to the list of active chats...
To get a permanent URL, the workaround was that you could schedule a hangout very far in the future. Are you saying that you can't run a specified application on that?
A qualified “yes, exactly”: I haven’t found a way to do it, which is different from saying a way doesn’t exist.
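For what it's worth, the redirect-server idea proposed above is small to implement. Here is a minimal sketch in Python using Flask; the lwsh.me domain, the /register route, and the idea of the Hangout app POSTing the value of getHangoutUrl() back to the server are all assumptions taken from the proposal, not an existing service:

```python
from flask import Flask, abort, redirect, request

app = Flask(__name__)

# Most recently reported Hangout URL; updated by the running Hangout app.
current_hangout_url = None

@app.route("/register", methods=["POST"])
def register():
    """Hypothetically called by the Hangout app after it reads getHangoutUrl()."""
    global current_hangout_url
    url = request.form.get("url", "")
    # Only accept things that look like Hangout URLs.
    if not url.startswith("https://plus.google.com/hangouts/"):
        abort(400)
    current_hangout_url = url
    return "ok"

@app.route("/")
def go():
    """Visiting the permanent URL redirects to whichever Hangout is active."""
    if current_hangout_url is None:
        abort(503)  # no Hangout has registered itself yet
    return redirect(current_hangout_url)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```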
I'm not sure what you mean by "runs an app for all users". Are you writing a separate app that you want the hangout to automatically open on entry? Doesn't it make more sense to do this the other way around?
The app runs within Google Hangouts (like drive, chat, youtube, effects) which is part of the draw of using that platform.
Of course it does, but reality in this case does not appear to make sense. :-(
Adding apps to permanent Google Hangouts works for me—shouldn’t we revisit this option?
Possibly. I know it used to be possible and the capability was lost in a change, so maybe they changed it back while I wasn’t looking. I also got a PM recently noting that lightirc supports webcams; that might be an even better option since it would give us server control.
I’m busy being sick right now, but I’ll take a new look at things once I’m functional again.
What are good sources for “rational” (or at least not actively harmful) advice on relationships?
What sort of relationships? Business? Romantic? Domestic? Shared hobby?
The undercurrent that runs along good advice for most is “make your presence a pleasant influence in the other person’s life.” (This is good advice for only some business relationships.)
If you know of a reference of similar quality to the one I mention here but for platonic relationships, I would appreciate the referral. The book that I mentioned touches on such, but I think it intends to somewhat focus on romance.
I don’t, but I do appreciate your referral of that book.
I was implicitly referring to romantic ones. I imagine a lot of the advice would overlap, but the quality of advice for those is particularly bad.
The Captain Awkward advice blog. They’re not currently taking questions but the archives cover lots of material, and I found just reading the various responses on many different problems, even ones that were in no way similar to mine, allowed me to approach my issues from a new perspective.
A book on “nonviolent communication” is also handy rationality advice.
Will and Divia talk about rational relationships.
Athol Kay for ev-psych aware long-term relationship advice. (Holy crap it works).
Seconding nonviolent communication
That guy’s stuff has been said to have a shitload of mistrust, manipulation and misogyny which poisons reasonable everyday advice about getting along.
Check out the comments there on how this overall attitude to relationships that he (and other stereotypical PUA writers) present can be so nasty, despite some grains of common sense that it contains. Seriously, would you enjoy playing the part of a cynical, paranoid control freak with a person whom you want to be your life partner?
Athol's advice is useful; he does excellent work advising couples with very poor marriages. So far I have not encountered anything that is more unethical than any mainstream relationship advice. Indeed, I think it is less toxic than mainstream relationship advice.
As to misogyny, this is a bit awkward: I actually cite him as an example of a very much not woman-hating red pill blogger. Call Roissy a misogynist, and I will nod. Call Athol one, and I will downgrade how bad misogyny is.
Is there evidence that he is more successful at this than the typical “Blue Pill” marriage counselor/relationship expert? Even better would be evidence that he is more successful than the top tier of Blue Pill experts. I realize these are hard things to measure, and I don’t expect to see scientific studies, but I’m wondering what you’re basing your claim of his excellence on. Is it just testimonials? Personal experience?
I guess nobody measured Athol’s counselling scientifically; we only have self-reports of people who say it helped them (on his web page), which is an obvious selection effect.
Maybe someone measured Blue Pill counselling. I would be curious about the results. For starters, whether it is better or worse than no counselling. (I don’t have any data on this, not even the positive self-reports, but that’s mostly a fact about my ignorance.)
Oh, he is not a misogynist, all right, I just said that he frames his stuff in language that’s widely used and abused by misogynists. Geeks can’t appreciate how important proper connotations are in all social matters! We’ve talked about that before! The comments I linked to say as much; that might be some decent advice, but why frame it like that?
He is reclaiming the language! (Half-seriously.)
Look, there are some unsympathetic people everywhere. “Red Pill” people have Roissy. Feminists had Solanas. Comparing these two, at least Roissy didn’t try to kill anyone, nor does he recommend killing, so let’s cut him some slack. The difference is that Roissy is popular now, Solanas is mostly forgotten. Well, ten years later maybe nobody will know about Roissy, if the more sane people become more popular than him and the ideas will enter the mainstream. Try to silence Athol Kay, and then all you have left are the Roissys. Because the idea is already out there and it’s not going to disappear; it fits many people’s experiences too well. (For example myself.)
Connotations of ideas are a matter of political power. If you have the power, you can create positive connotations for your keywords and negative connotations for your opponents’ keywords. You can make your ideas mainstream, and for many people mainstream equals good. Currently, feminism has the power, so it has the power to create the connotations. And it has the power to demonize its opponents. And you are exercising this power right now. (You take a boo word “misogynist” and associate it with someone, and you have a socially valid argumentum ad hominem. If I tried to do the same thing using the word “misandrist”, I wouldn’t get anywhere, because people are not conditioned about that word, so they would just laugh at that kind of argument.)
Someone else could try to give the same advice while avoiding the sensitive words. Which means that for many words he would simply have to invent synonyms. Which would be academically dishonest, because it is a way to use someone's research without giving them credit. But it would be technically possible. Maybe even successful. The question is whether other people would not connect the old words with the new words. Some words, like "Red Pill", are not necessary. With some other words, the offensive part is the concept (for example, that female attraction is predictable, and this is specifically how it works).
Fun fact: There is a RedPillWomen group on Reddit. Are those women misogynists too? (Here is a thread about hating women and their choices, here is a thread about feminism versus the Red Pill.)
No shit, Sherlock. Internalized sexism exists. Luckily, one lady who just wanted “traditional gender roles” in her relationship, and less of the fucked-in-the-headedness, has escaped that goddamn cesspool and reported her experience:
http://www.reddit.com/r/TheBluePill/comments/1hh5z5/changed_my_view/
Also:
http://www.reddit.com/r/TheBluePill/comments/1gapim/trp_why_i_actually_believed_this_shit_for_a_month/
I disagree that his outlook is toxic. He uses a realistic model of the people involved and recommends advice that would achieve what you want under that model. He repeatedly states that it is a mistake to make negative moral judgement of your partner just because they are predictable in certain ways. His advice is never about manipulation, instead being win-win improvements that your partner would also endorse if they were aware of all the details, and he suggests that they should be made aware of such details.
I see nothing to be outraged about, except that things didn’t turn out to actually be how we previously imagined it. In any case, that’s not his fault, and he does an admirable job of recommending ethical relationship advice in a world where people are actually physical machines that react in predictable ways to stimuli.
Drop the adjectives. I strive to be self-aware, and to act in the way that works best (in the sense of happiness, satisfaction, and all the other things we care about) for me and my wife, given my best model of the situation.
I do occasionally use his advice with my wife, and she is fully aware of it, and very much appreciates it when I do. We really don’t care what a bunch of naive leftists on the internet think of how we model and do things.
Someone asked for rational relationship advice, and IMO Athol's advice is right on the money for that. Keep your politics out of it, please.
If this is the case, he is doing serious damage by associating with the “Red Pill” brand of misogynists and misanthropes. If he actually wants to further these stated objectives, he should drop this association pronto.
Serious damage to who? Idiots who fail to adopt his advice because he calls it a name that is associated with other (even good) ideas that other idiots happen to be attracted to? That’s a tragedy, of course, but it hardly seems pressing.
Seems to me that people should be able to judge ideas on their quality, not on which “team” is tangentially associated with them. Maybe that’s asking too much, though, and writers should just assume the readers are morally retarded, like you suggest.
You’re not familiar with the whole “Red Pill” meme cluster/subculture, I take it? It strongly promotes misanthropic attitudes which most people would consider morally wrong, and it selects for these attitudes in its adherents.
I’m somewhat familiar. My impression is that the steelman version of it is a blanket label for views that reject the controversial empirical and philosophical claims of the left-wing mainstream:
Everyone is cognitively equal across race and sex and such
Cognition and desire are not embodied in predictable biology
Blank slate atomic agent model of relationships and such
(Various conspiracy theories)
democracy is awesome
etc.
Pointing out that an idea has stupid people who believe it is not really a good argument against that idea. Hitler was a vegetarian and a eugenicist, but those ideas are still OK.
So?
Here’s why that’s true: “Red Pill” covers empirical revisionism of mainstream leftism. What kind of people do you expect to be attracted to such a label without considering which ideas are correct? I would expect bitter social outcasts, people who fail to ideologically conform, a few unapologetic intellectuals, and people who reject leftism for other reasons.
Then how are those people going to appear to someone who is "blue pilled" (ie a reasonable mainstream progressive, for lack of a better word)? They are going to appear like the enemy. The observer has been brought up with the assumption that anyone who disagrees on points X, Y, and Z is evil. Along comes a label that covers exactly disagreement with the mainstream on X, Y, and Z, so of course the people who identify with that label are going to appear evil.
Note that I’ve offered a plausible explanation for the existence of idiots and jerks in the red-pill cluster, and their appearance of evil without reference to the factual or moral accuracy of the “red-pill” claims. Your impressions are orthogonal to the facts.
Now of course, by the selection effect you mention and I explain, the “red pill” space is going to be actually filled with idiots and evil people, who will tend to influence things a lot. But I’m from 4chan, so I have the nasty habit of filtering out the background noise of idiots and evil to find the good stuff, and the “red-pill” space has a lot of good stuff in it, once you start ignoring the misogynists, conspiracy theorists, misanthropes, and antisocial idiots.
I’ve been reading a lot of red pill stuff lately (while currently remaining agnostic), and my impression is that most of the prominent “red pill” writers are in fact really nasty. They seem to revel in how offensive their beliefs are to the general public and crank it up to eleven just to cause a reaction. Roissy is an obvious example. About one third of his posts don’t even have any point, they’re just him ranting about how much he hates fat women. Moldbug bafflingly decides to call black people “Negroes” (while offering some weird historical justification for doing so). Regardless of the actual truth of the red pill movement’s literal beliefs, I think they bring most of their misanthropic, hateful reputation on themselves.
I haven’t read Athol Kay, so I don’t know what his deal is.
It’s not that baffling if you know where Moldbug’s ideas come from. Since he is effectively restating the ideas of Thomas Carlyle and other 19th century conservatives (admittedly in modernized terms), it’s quite fitting in a way that he should lift some of their lexicon as well.
What is baffling to me is that it is OK to call black people “black people”. Both terms label a race by the same exaggerated description of a visible difference, and in general requiring Latin-derived terms is a higher-status move than using common English words. Prior to specific (foreign) cultural exposure, I would have expected “black people” to be an offensive label and so avoided it.
The euphemism treadmill is basically arbitrary most of the time. For example, “people of color” is very PC right now, but “colored people” is considered KKK-language. It is what it is.
Also, “black people” is a kind of strange term. Pretty much all black people are okay with it, but a lot of white people are weirdly afraid of saying it, especially in formal settings.
Black is a useful term for referring to people of African descent who aren’t African-American, e.g. Caribbean-Americans.
“People of color” currently means anyone other than white people, not black people exclusively.
Really? That is even more surprising to me.
My experience is that it is the preferred term for non-white people among the Social Justice crowd on Tumblr and other websites.
Language can be pretty arbitrary. It’s not as though science fiction reliably has any science in it, even fake science.
Isn’t a similar dynamic involved anywhere where people are developing an idea that offensively contradicts the belief of a majority?
We could similarly ask why some atheists are so aggressive, and whether it wouldn’t be better for others to avoid using the “atheist” label so as not to be associated with these people; otherwise they deserve all the religious backlash.
There are two strategies to become widely popular: say exactly the mainstream thing, or say the most shocking thing. The former strategy cannot be used if you want to argue against the mainstream opinion. Therefore the most famous writers of non-mainstream opinions will be the shocking ones. Not because the idea is necessarily shocking, but because of a selection effect—if you have a non-mainstream idea and you are not shocking, you will not become popular worldwide.
I may sometimes disagree with how Richard Dawkins chooses his words, but avoiding the successful “atheist” label would be a losing strategy. I disagree with a lot of what Roissy says, but “red pill” is a successful meme, and he is not the only one using it.
There are words which have both positive and negative connotations to different people. To insist that the negative connotation is the true one often simply means that the person dislikes the idea (otherwise they would be more likely to insist that the positive connotation is the true one).
This looks like begging the question to me. Whether an idea offensively contradicts mainstream beliefs has a lot to do with the connotations that happen to be associated with it. Lots of reasonably popular ideas contradict mainstream beliefs, but are not especially offensive. Obviously, once an idea becomes popular enough to be part of the mainstream, this whole distinction no longer makes sense.
Indeed, this explains why many non-theistic people steadfastly refuse to self-identify as atheists (some of them may call themselves agnostics or non-believers). It also partially explains why the movements “Atheism Plus” and “Atheism 2.0” have started gaining currency.
Similarly, any useful and non-offensive content of “red pill” beliefs may be easily found and developed under other labels, such as “seduction community”, “game”/”PUA”, “ev psych” and the like.
It’s not clear why we should care whether a writer of non-mainstream opinions is famous, especially when such fame correlates poorly with truth-seeking and/or the opinions are gratuitously made socially unpopular for the sake of “controversy”.
Serious question: name a positive connotation of “The Red Pill” which is not shared by “Game”/“PUA”/“seduction community” or “ev psych”.
I agree with your explanation about some people’s preference for the label “agnostic”. The “atheism plus” on the other hand feels to me like “atheism plus political correctness”—it is certainly not focused on not offending religious people. (So an equivalent would be a Game blog who cares about not offending… for example Muslims. That’s not the same as a Game blog trying not to offend feminists.)
Anyone who liked the movie The Matrix? (Unless all of them are already in the seduction community.) I could imagine using the same word as a metaphor for, say, early retirement, or any similar activity that requires you to go against the stereotypical beliefs of most people. I admit I have never seen the word used in this context; I just feel like it would fit there perfectly. (Also it would fit perfectly with most conspiracy theories.)
I don’t have that much knowledge of the Atheism Plus movement, but I have read some stuff that suggests they are concerned about how prominent atheists talk about Islam, at least. I also wouldn’t be at all surprised if they had expressed opposition to Dawkins’ description of religious upbringing as child abuse. I do know some feminists who were/are pissed about that.
I’m not necessarily disagreeing that the red pill writers are pursuing an effective strategy in disseminating their beliefs. To be honest, I can see it either way. On the one hand, offending people gets them to notice you, and emotionally charged arguments are more interesting. On the other hand, some of the rhetoric might needlessly alienate people, and to a certain extent it can discredit the ideas (e.g. someone recommends Athol Kay, someone says “isn’t he one of those red pill guys? I saw Roissy’s blog and it was appalling, no way I can listen to one of them”). I definitely don’t think that being deliberately offensive is literally the only way to spread a contrarian belief.
But I don’t think the red pill movement should be able to have their cake and eat it too. You can’t deliberately make your writing as offensive and obnoxious as possible in order to try to get it to spread, and then turn around and say “People are offended? This just shows that anyone who doesn’t think like the mainstream becomes a public enemy!”
Some movements are able to have their cake and eat it too. If a hundred years ago someone told the early feminists to be extra careful about not offending people, would they listen? Would it be a winning strategy?
I agree that it feels like people should choose between having their cake and eating it. But is this a description of how the world really works, or merely a just world fallacy? As a competing hypothesis, maybe it is all about power—if you can crush your enemies (for example make them unemployed) and give positions of power (and grant money) to your allies, then people will celebrate you as the force of good, because everyone wants to join the winner. And if you fail, the only difference between being polite and impolite is whether you will be forgotten or despised.
Let’s imagine that Athol Kay stopped using the forbidden words like “Red pill” et cetera. What about the rest of his message? Would it stop feeling offensive to the “Blue pill” people, or not? If the blog were successful, they would notice, and they would attack him anyway. (The linked article reacted to Athol’s description of a “red pill woman”, but would it be different if he just called her e.g. a “perfect woman”?) And if the blog were obscure enough to avoid being noticed, then… it wouldn’t really matter what’s written there.
Compared with most blogs discussing the topic on either side, Athol Kay is extra polite. We can criticize him for not being perfect, while conveniently forgetting that neither is anyone else.
Um, the issue is not that he’s using “the red pill” or any other forbidden words, but that he’s expressly associating with and supporting a subculture of misanthropes, losers and misanthropic losers who happen to be using “The Red Pill” as their badge of honor. And yes, some people might still be offended by his other messages, even if he stopped providing this kind of enablement. But he would be taking their strongest argument against him off the table.
Just thinking… is “loser” a gendered word or not? Would you feel comfortable describing a group of women as losers on a public forum?
If not, then what would be the proper way to describe a subculture of women who are not satisfied with how society works now, who feel their options are limited by society, who discuss endlessly on their blogs how society should be changed, and who use some keywords as their badge of honor?
That’s an interesting question—I actually can’t think of any group where that would be an accurate description, so I don’t really have a good answer here. Sorry about that.
People who may or may not be on to something? Sure, lots of folks blame the failings of society for their comparative lack of success, and that’s sometimes unhelpful. But even that is a lot better than just complaining about how all other people—most specifically including women as well as ‘alpha male’ other guys—are somehow evil and stupid. That’s called sour grapes, and IMHO it is a highly blameworthy attitude, not least since it perpetuates and deepens the originally poor outcomes.
No real group, or even an imaginary group? I mean, take the “misanthropic losers” you described (and for the sake of debate, let’s assume your description of them is completely accurate), and imagine exactly the same group with genders reversed. Would it be okay to call those women publicly “losers”?
Or perhaps “loser” is a gendered slur. (Something like the word “slut”, which you can use to offend women, but if you use it to describe a sexually adventurous man, it somehow does not have the same shaming power.) In which case, saying that the “Red Pill” readers are losers contains almost as much information as saying that they are men.
Complaining achieves nothing, and people who complain without achieving anything are, yeah, losers.
How about a group that achieves real results? For example, there is a controversial movement, in an obscure part of the “manosphere”, behind a blogger Valentine Solarius, often criticized by feminists for writing things like “to be female is to be deficient, cognitively limited”; “the female is completely egocentric, trapped inside herself, incapable of empathizing or identifying with others, or love, friendship”; “her intelligence is a mere tool in the services of her drives and needs”; “the female has one glaring area of superiority over the male—public relations; she has done a brilliant job of convincing millions of men that women are men and men are women”; “every woman, deep down, knows she’s a worthless piece of shit”. -- He writes a lot about his desire to kill women. Actually, he attacked and almost killed one woman for not responding to his e-mail, but she survived, so he only spent three years in prison. He seems to be popular among some men politically influential in the Republican party… so, let’s assume that his friends really succeed in creating a political movement, change the way society perceives women, change the laws as they want them, etc. Then they would no longer be losers, would they? Now, would that be better than merely blogging about the “Red Pill”? (See his blog for some more crazy ideas.)
Well, we can imagine anything we want to. It’s not hard to think of a possible world where some loose subculture or organized group of women could be fairly characterized as “losers” on a par with redpillers. You could basically get there if, say, radical feminism was a lot more dysfunctional than it actually is. No such luck, though.
Perhaps you’re missing the point here? By “achieving real results”, I obviously don’t mean committing assault. Even successfully influencing politics would be a dubious achievement, as long as their basic ideology remains what it is. However, it is indeed a stylized fact in politics and social science that such nasty subcultures and movements generally appeal to people who are quite low either in self-perceived status/achievement, or in their level on Maslow’s hierarchy of human needs.
Your quotes from the Manosphere blogger were quite sobering indeed, but I’ll be fair here—you can find such crazies in any extreme movement, so perhaps that’s not what’s most relevant after all. If most redpillers stuck to what they might perhaps be said to do best, e.g. social critiques about the pervasive influence of feminized thinking, the male’s unrecognized role as an economic provider and the like, as well as formulating reform proposals (however extreme they might be), I don’t think they would be so controversial. Who knows, they might even become popular in some underground circles who are quite fascinated by out-of-the-box thinking.
So, is it more about that he has loser friends than about what he writes? And by losers, I mean Greens.
Perhaps so, to some extent: you may like it or not, but guilt by association is a successful political tactic. But the problem is made even worse by the fact that his writings occasionally support the Greens’ nasty attitudes.
To take the analogy even further, imagine a respected scientist writing approvingly about “deep ecologists” and “Soylent Greens”, who believe in the primacy of natural wilderness, and argue that human societies are inherently evil and inimical to true happiness, excepting “naturally co-evolved” bands and tribes of low-impact hunter-gatherers. Such a belief might even be said to be supported by evolutionary psychology, in some sense. But many people would nonetheless oppose it and describe it as nasty—notably including more moderate Greens, who might perhaps turn to other sciences such as economics, and think more favorably of “sustainable development” or even “natural capitalism”.
ALERT. Fully General Counterargument detected in line 1.
Seriously, how many people would actually refer to thoughtful critique and even rejection of mainstream views as “Red Pill” material? Basically nobody would, unless they are already committed to the “Red Pill” identity for unrelated reasons. That’s just not what Red Pill means in the first place.
And yes, the ‘Red Pill’ thing attracts jerks and losers, but that’s the least of its problems. A very real issue is that this ensures that ideas in the Red Pill space achieve memetic success not by their practical usefulness or adherence to truth-seeking best practices, but by shock value and by being most acceptable or even agreeable to jerks and losers.
Yes, you can go looking for diamonds in the mud: there’s nothing wrong with that and sometimes it works. But that does not require you, or anyone else, to provide enablement to such a deeply toxic and ethically problematic subculture.
Mencius Moldbug
Athol Kay
High quality PUA
etc.
Arguing about what a term means is bound to go nowhere, but in my experience, “red pill” has been associated with useful and interesting ideas. Maybe that’s just me and my experience isn’t valid though.
I don’t think it’s fair to characterize an entire space of ideas by its strawest members (shock-value-seeking “edgy” losers). I could use that technique to dismiss any given space of ideas. See for example Yvain’s analysis of how mainstream ideas migrate to crazytown by runaway signalling games.
I think there is a high proportion of valuable ideas in the part of “redpillspace” that I’ve been exposed to. Maybe we are looking at different things that happen to be called the same name, though.
But based on your terminology and attitude here, I think you are cultivating hatred and negativity, which is harmful IMO. In general, I think it is much better to actively look for the good aspects of things and try to like more things rather than casting judgement and being outraged at more things.
Correct, I attempt to see the good parts of things and ignore the crud with full generality.
This is beside the point, IMHO; Moldbug’s references to “taking the red pill” are well explained by his peculiar writing style. I think they are mostly unrelated to how Athol Kay, reddit!TheRedPill and others use the term. OTOH, Multiheaded’s comment upthread provides proof that Kay’s views are genuinely problematic, in a way that’s closely related to, and explained by, his involvement in the TheRedPill meme cluster. For the time being, I make no claim one way or the other about other “high quality PUAs”.
Do also note that I really am criticizing a subculture and meme cluster here. AIUI, this has nothing to do with idea spaces in a more general sense, or even factual claims about the real world. Again, connotations and attitudes are what’s most relevant here. Moreover, I’m not sure what gave you the feeling that I am “cultivating hatred and negativity”, of all things. While it’s quite true that I am genuinely concerned about this subculture, because of… well, you said it already, the real issue here is Kay’s providing enablement to it, with the attendant bad effects. (Of course, this may also apply to other self-styled PUAs).
If you refer to the linked article, and by “proof” you mean “strawmanning and non-sequitur”...
Seriously: Imagine a comment or an article written in a similar tone on LW. How many votes would it get?
An example:
Where exactly in Athol’s article, or even anywhere on his website, did anyone say anything about women’s decaying beauty over the age of 14? Citation needed!
Yeah, this is the argumentation style we refer to when saying “raising the sanity waterline”… not!
Who exactly is the manipulative hateful douchebag in this article? Are you sure it was Athol Kay?
Um, I think this is a silly argument, honestly. As the name makes reasonably clear, Man Boobz is a humor and satire website. Unlike most articles posted here at LW, they do not claim to qualify for any standard of rational argument. What’s useful about them is in their pointing to some of Athol Kay’s published opinions, and perhaps pointing out some undesirable connotations of these opinions.
Let me try to steel-man MB’s critique of this statement. Why is it especially important for a RPW to understand this—especially when the basic notion is clearly understood by any COSMO reader (which is a rather low standard)? Athol Kay does not explain how this understanding is supposed to pay rent in terms of improved results. And it is clear that, unless some special care is taken (which Athol Kay does not point out), a naïve interpretation of such “understanding” has unpleasant and unhelpful connotations.
Keep in mind that PUA/game works best when it manages to disrupt the mainstream understanding of “sexual market value” as opposed to accepting it uncritically, and the seduction community is successfully developing “girl game” methods which can allow women to be more successful in the market. By failing to point this out, Kay is under-serving Red Pill women especially badly.
This falls under Bastiat’s fallacy of “what is seen and what is not seen”. We see that divorce sucks; what we do not see is that divorce is nonetheless rational whenever not divorcing would suck even more.
Strawmanning could be a technique used in humor and satire, but even then it isn’t a “proof” that someone’s views are “genuinely problematic”.
How about this: Two women in their 50s compare their husbands with the men who were attracted to them when they were 18, and both see that their husbands’ “market value” is lower. Let’s assume there is no other problem in the marriage; they just want to be maximizers, not merely satisficers.
One of them is a “Red pill woman”, she does not divorce and keeps a relatively good relationship. The other one is encouraged by success stories in popular media, gets a divorce… and then finds that the men who were interested in her when she was 18 are actually not interested anymore, and that she probably would have maximized her happiness by staying married. -- This is how the belief can pay its rent.
I wouldn’t advocate staying married for example in cases of domestic violence, and I guess neither would Athol Kay. So we are speaking about “sucking” in sense of “not being with the best partner one could be with”, right? In that case, understanding one’s “market value” is critical in determining whether staying or leaving is better. (By the way, a significant part of Athol’s blog is about how men should increase their “market value”, whether by exercise or making more money or whatever.)
And then there is the impact on the children. Even if mommy succeeds in getting a more attractive partner, we should not expect that to automatically make them happy. This trade-off is often unacknowledged.
The Red Pill on Reddit. Is this the one you’re talking about?
I’m not sure that the subreddit enjoys any sort of official status, but it’s certainly representative of what I’m talking about. Do note that the central problem with the RP meme cluster is one of connotation and general attitude, although some factual claims can definitely be problematic as well.
Come to think of it, even the name ‘Red Pill’ embodies all sort of irrationality and negative attitudes. Apparently, it is based on the very ancient idea that female period blood is in some sense a magical substance—so one can fashion a “Red Pill” out of it using sympathetic magick, and thus acquire some sort of occult or arcane knowledge which is normally exclusive to women and disallowed to men.
Have you seen The Matrix?
Well, you can find a lot of magickal or Neopagan symbolism in The Matrix if you know how to look for it. The word “matrix” itself means “something motherly” in Latin, and its use in the movie could be viewed as a reference to the Great Mother Goddess. (More specifically, the Great Mother is actually one archetype of the feminine Great Goddess of Neopaganism.)
Citation Requested.
The association is not a matter of packaging but content. The reductionist approach to one’s social life, the model of male and female sexual psychology he uses, etc. If he dropped all the “Red Pill” or “PUA” markers such as vocabulary, links or credits, he would still be identified with them by critics and advocates.
Can you point to some less blatantly biased commentary?
This might seem surprising, but I broadly agree with this assessment, except that I can’t tell what “stereotypical PUA writers” might mean in this context. The “Red Pill” is a very distinctive subculture which is characterized by wallowing in misogynistic—and most often, just plain misanthropic—attitude!cynicism (I’m using Robin Hanson’s “meta-cynical” taxonomy of cynicism here) about gender relations, relationships and the like. Its memes may be inspired by mainstream PUA and ev-psych, but—make no mistake here—it’s absolutely poisonous if you share the mainstream PUA goal of long-term self-improvement in such matters.
Karen Pryor’s Don’t Shoot the Dog.
Just kidding… sorta (Spoiler: It’s a book on behavior training.)
I am reading the textbook mentioned here. I find it enjoyable reading and it seems useful, but I have not applied any of it yet.
I believe that this is the book being referred to. I know that two of the authors are missing in the Amazon link, but they are present here—it appears that some of the authors were purged during the updating.
I wrote a (highly speculative) article on my blog about the conversion of negative energy into ordinary mass-energy.
http://protokol2020.wordpress.com/2013/07/07/the-menace-that-is-dark-energy/
I don’t expect mercy, though.
How much do you know about general relativity? (This is an honest question, BTW—I know the postulates behind it and some of the maths, but I’ve never studied its implications in detail, besides the Schwarzschild metric and the FLRW metric, so I have trouble telling the levels above mine apart.)
I will pass on discussing this here; I hope you understand that.
But please, feel free to engage me there, on my blog.
I’m afraid I’m not knowledgeable enough for that—I can’t tell whether non-trivial claims about GR are valid any more reliably than by noticing whether or not the person who made them sounds like a crackpot.
Yes, these relativity and quantum mechanics debates usually end like this: “I don’t have enough knowledge, but it seems you don’t have it either…”
Is this a reason to avoid them? Maybe not.
But this is why I decided to also write about that planet rotation thing, where the situation is very transparent. Thousands of top experts have no clue.
Ben Goertzel will take your money and try to put an AGI inside a robot.
Trigger warning: Those creepy semi-human robots, with their human-imitating jerky facial gestures, will make anyone who hasn’t spent months and months locked in a workshop building them recoil in horror.
That page mentions “common sense” quite a bit. Meanwhile, this is the latest research in common sense and verbal ability.
That was hideous. Poor production values and a sloppy video that oozes incompetence.
Um, wow.
My eyes would be on this sort of thing if I wanted to keep up to date on serious AI. Demo video of the hardware here.
Hey everyone, long-time lurker here (I ran a LW group in Ft. Lauderdale, FL for about a year) and this is my first comment. I would like to post a discussion topic on a proposal for potential low-hanging fruit: fixing up Wikipedia pages related to LessWrong’s interests (existential risk, rationality, decision theory, cognitive biases, etc., and the organizations/people associated with them). I’d definitely be interested in getting some feedback on creating a wiki project that focuses on improving these pages.
Is there a (more well-known/mainstream) name for arguments-as-soldiers-bias?
More specifically, interpreting an explanation of why or how an event happened as approval of that event. Or claiming that someone who points out a flaw in an argument against X is a supporter of X. (maybe these have separate names?)
Should we even call this a bias? They’re both unfortunate, but they’re also both reasonable Bayesian updates.
Good point. They are generally useful heuristics that sometimes lead to unnecessary conflicts.
We’ve been having beautiful weather recently in my corner of the world, which is something of a rarity. I have a number of side projects and hobbies that I tinker with during the evenings, all of them indoors. The beautiful days were making me feel guilt about not spending time outside.
So I took to going on bike rides after work, dropping by the beach on occasion, and hiking on weekends. Unfortunately, during these activities, my mind was usually back on my side projects, planning what to do next. I’d often rush my excursions. I was trying to tick the “outdoors” box so I could get back to my passions without guilt.
This realization fueled the guilt. I began to wonder how I could actually enjoy the outdoors, if both staying inside and playing outside left me dissatisfied.
What I realized was this: You don’t enjoy nice weather by forcing yourself outdoors. You enjoy nice weather by having an outdoor hobby, an outdoor passion that you pursue regardless of weather. Then when the weather is good, you enjoy it automatically and non-superficially.
Similarly:
You don’t become a music star by trying. You become a music star by wanting to make music.
You don’t become intelligent by trying. You become intelligent by wanting the knowledge.
It was a revelation to me that I can’t always take a direct path to the type of person I want to be. If I want to change the type of person that I am, I may have to adopt new terminal goals.
Wat? Methinks you have that backwards. “X reliably leads to Y, which I like, so I should like X” is reasonable; “X reliably leads to Y, which I like, so I should adopt X as a terminal goal, valuable regardless of what it gets me” is madness.
Mixing up your goal hierarchy is the path to the dark side.
Perhaps I did not adequately get my point across.
If you really want to be a music star, but you hate making music, you are in trouble. If after realizing this you still really want to be a music star, consider finding ways to modify your preferences concerning music creation.
We’re born with mixed up goal hierarchies. I’m merely pointing out that untangling your goal hierarchies can require changing your goals, and that some goals can be best achieved by driving towards something else.
Ok, let’s distinguish between your preferences as abstract ordering over lotteries over possible worlds, and preferences as physical facts about how you feel about something.
It is a bad idea to change the former for instrumental reasons. The latter are simply physical facts that you should change to be whatever the former thinks would be useful.
That probably clears up the confusion.
I would agree completely, if humans were perfect rationalists in full control of their minds. In my (admittedly narrow) experience, people who have the creation of art / attainment of knowledge as a terminal goal usually create better art / attain more knowledge than people who have similar instrumental goals.
I am indeed suggesting that the best way to achieve your current terminal goals may be to change your preference ordering over lotteries over possible worlds. If you are a young college student worried about the poor economy, and all you really want is a job, you should consider finding a passion.
Now, you could say that such people don’t really have “get a job” as a terminal goal, that what they actually want is stability or something. But that’s precisely my point: humans aren’t perfect rationalists. Sometimes they have stupid end-games. (Think of all the people who just want to get rich.)
If you find yourself holding a terminal goal that should have been instrumental, you’d better change your terminal goals.
Ok. I disagree. I tried to separate what you want in the abstract from the physical fact of what this piece of meat you are sending into the future “wants”, but then you went and re-conflated them. I’m tapping out.
For what it’s worth, I don’t think we disagree. In your terminology, my point is that people don’t start with clearly separated “abstract wants” and “meat wants”, and often have them conflated without realizing it. I hope we can both agree that if you find yourself thus confused, it’s a good idea to adjust your abstract wants, no matter how many people refer to such actions as a “path to the dark side”.
(Alternatively, I can understand rejecting the claim that abstract-wants and meat-wants can be conflated. In that case we do disagree, for it seems to me that many people truly believe and act as if “getting rich” is a terminal goal.)
You used the phrase ‘terminal goals’. This describes adopting an instrumental goal. Nyan’s criticism applies.
I disagree. It seems to me that people who have music creation as a terminal goal are more likely to create good music than people who have music creation as an instrumental goal. Humans are not perfect rationalists, and human motivation is a fickle beast. If you want to be a music star, and you have control over your terminal goals, I strongly suggest adopting a terminal goal of creating good music.
I suggest that you abandon the word ‘terminal’ and simply speak of goals. You are using the word incorrectly and so undermining whatever other point you may have had.
What do you think the word “terminal” means in this context, and what do you think I think it means?
Edit: Seriously, I’m not being facetious. I think I am using the word correctly, and if I’m not, I’d like to know. The downvotes tell me little.
In local parlance, “terminal” values are a decision maker’s ultimate values, the things they consider ends in themselves.
A decision maker should never want to change their terminal values.
For example, if a being has “wanting to be a music star” as a terminal value, then it should adopt “wanting to make music” as an instrumental value.
For humans, how these values feel psychologically is a different question from whether they are terminal or not.
See here for more information
Thanks. Looks like I was using the word as I intended to.
My point is that humans (who are imperfect decision makers and not in full control of their motivational systems) may actually benefit from changing their terminal goals, even though perfectly rational agents with consistent utility functions never would want to.
Humans are not always consistent, and making yourself consistent can involve dropping or acquiring terminal goals. (Consider a converted slaveowner acquiring a terminal goal of improving quality of life for all humans.)
My original point stems from two observations: Firstly, that many people seem to have lost purposes where their terminal goals should be. Secondly, that some humans may find it difficult to “trick” their goal system.
You may find it easier to achieve “future me is a music star” by sending a version of yourself with different terminal goals (wanting to make music) into the future, as opposed to sending a version of you who makes music for fame’s sake. (The assumption here is that the music you make in the former case is better, and that you don’t have access to it in the latter case, because humans find it difficult to trick their goal system.)
This is somewhat related to purchasing warm fuzzies. There are some things you cannot achieve by willpower alone. In order to achieve your current terminal goals, you may need to change your terminal goals.
I realize that this is a potentially uncomfortable conclusion, but I reject wedrifid’s claim that I was misusing the word.
Get some pot-plants and put a sunlamp on your desk. Then every day is a nice day, and you can stop this “outside” nonsense. :P
A really bright daylight-spectrum desk lamp does make things lovely.
Anyone around here familiar with Stoicism and/or cognitive-behavioural therapy? I am reading this book and it seems vaguely like it would be of relevance to this site. Especially the focus on training the mind to make a habit of questioning whether something is ultimately in our control or not.
Also, I am kind of sad that there is nothing around here like a self-study guide that is easily accessible for the public.
And finally, I am confused again and again why there are so many posts about epistemic rationality and so few about instrumental rationality. The former helps me win less than the latter does. Or maybe I am wrong about the purpose of this site.
Post post scriptum: In light of current revelations about the NSA I would be very happy about this site offering https to protect passwords and to obfuscate the specific viewed content.
CBT is the only psychotherapy with evidence of working better than just talking with someone for the same length of time. (Not to denigrate the value of attention by itself, but e.g. counselors are way cheaper than psychiatrists.) It seems to work well if it’s guided, i.e. you have an actual therapist as well as the book to work through.
I don’t know how it works for people who aren’t coming to it with an actual problem to solve, but rather for self-knowledge as a philosophical end, or to gain the power of hacking themselves.
Curiosity: How much cheaper?
I’ve felt like I could benefit from therapy from time to time, but I hate dealing with doctors and insurance.
Hard to generalise internationally—but non-medical counselors charge like jobs that involve paying attention to someone, whereas psychiatrists charge like specialist doctors (which they are). I was mostly thinking in terms of public funding for medicine, where bang for the buck is an eternal consideration.
Probably because teaching instrumental rationality isn’t to the comparative advantage of anyone here. There are already tons of resources out there on improving your willpower, getting rich, becoming happier, being more attractive, losing weight, etc. You can go out and buy a CBT workbook written by a PhD psychologist on almost any subject—why would you want some internet user to write up a post instead?
Out of curiosity, what type of instrumental rationality posts would you like to see here?
Then linking to it would be interesting. I can’t reasonably review the whole literature (that again reviews academic literature) to find the better or best books on the topics of my interest.
So many self-help books are either crap because their content is worthless, or painful to read because they have such a low content-to-word ratio by any reasonable metric. I want just the facts. Take investing as an example. It can be summarized in one sentence: “Take as much money as you are comfortable with and invest it in a broad index fund, drawing it down so as to come out with zero money at the moment of your death, unless you want to leave some behind.” And still there is a host of books from professional investors detailing technical analysis of the most obscure financial products.
Have reading groups reviewing books of interest. Post summaries or reviews of books of interest. Discuss the cutting edge of practical research, if relevant to our lives. This stays with your observation that most practically interesting stuff is already written.
Moving on, we know about all kinds of biases. We also know that some of those biases are helped by simply knowing about them, some are not. For the latter you need some kind of behavioural change. I do not know about books helping with that.
I know that this post is not precise, and it can’t be, as it explores what could be. If I knew exactly what I wanted, I would already get it; this is a process of exploring.
I’ve found that “just the facts” doesn’t really work for self-help, because you need to a) be able to remember the advice b) believe on an emotional, not just rational level that it works and c) be actually motivated to implement the advice. This usually necessitates having the giver of advice drum it into you a whole bunch of different ways over the course of the eight hours or so spent reading the book.
One problem with this is that “reviewing” self-help books is hard because ultimately the judge of a good self-help book is whether or not it helps you, and you can’t judge that until a few months down the road. Plus there can be an infinity of confounding factors.
But I can see your point. Making practical instrumentality issues more of a theme of the conversation here is appealing to me. Cut down on the discussion of boring, useless things (to me, of course) like Newcomb’s problem and utility functions and instead discuss how to be happy and how to make money.
However, I have seen a few people complain about how LessWrong’s quality is deteriorating because the discussion is being overrun with “self-help”. So not everyone feels this way, for whatever reason.
Very true and a good observation. My reading of stoic practice informs this further: They had their sayings and short lists of “just the facts” but also put emphasis on their continuous practice. Indeed, my current critique of lesswrong is based on this impression. But to counter your point: I had things like Mister Money Moustache in mind where multiple screen pages are devoted to a single sentence of actual advice. I dislike that just as I don’t like Eliezer’s roundabout way of explaining things.
This can be helped by stating the criteria in advance. A few of the important criteria, at least for me, are correctness of advice, academic support, high information density, and readability. So some kind of judgement can be readily made immediately after reading the book. Or a professional can review the book regarding its correctness.
Regarding the complaint that LessWrong’s quality is deteriorating because the discussion is being overrun with “self-help”: my suggestion is/was to separate the discussion part of LessWrong into two parts, instrumental and epistemic. That way everyone gets their part without reading too much content that is, for them, unnecessary. But people are opposed to something like that, too. Fact is, the community here is changing and something has to be done about that. Usually people are very intelligent and informed around here, so I would love to hear their opinions on issues that matter to me.
Maybe we should have a “Instrumental Rationality Books” thread or something, similar to the “best textbooks” thread but with an emphasis on good self-help books or books that are otherwise useful in an everyday way.
That sounds like a good idea. I might make it in the next few days if no one else does.
This assumes that you know when you will die and can predict in advance how interest rates will vary in the future. It also ignores akrasia issues.
A group of internet users could discuss an existing book or a group of books, and say for example: “this part worked for me”, “this part didn’t work for me”, “I did this meta action to not forget using this part”, “here is a research that disproves one of the assumptions in the book” etc. They don’t have to replace the books, just build on them further.
It seems that many books are optimized (more or less successfully) to be bestsellers. A book that actually changes your life will not necessarily be more popular than a book that impresses you and makes you recommend it to your friends, even if your life remains unchanged or if the only change is being more (falsely) optimistic about your future successes.
I feel the same way about stoicism as I do about Buddhism: there’s some good stuff, but it’s hard to separate out from the accumulated mystical detritus. The advantage of modern psychology is that it tends to include the empirically supported parts of these traditions.
As for CBT, I’ve personally had extremely good experience with Introducing Cognitive Behavioural Therapy: A Practical Guide.
There is moodgym.
Hello and welcome to Phoenix Wright: Ace Economist.
Briefly, Phoenix Wright: Ace Attorney is a series of games where you play as Phoenix Wright, an attorney, who defends his client and solves crimes. Using a free online application that lets you make your own trials, I’ve turned Phoenix Wright into an economist and unleashed him upon the world.
I’m posting it here just in case it interests anyone. The LessWrong crowd is smart and well-educated, so I’d appreciate any feedback I can get from you fine folk.
Play it here (works best in Firefox):
http://aceattorney.sparklin.org/jeu.php?id_proces=49235
Although I’m using Ace Attorney: Online as a medium of expression here, this is not a normal Phoenix Wright game. This trial is actually intended to explain in a more fun and friendly format the ideas contained in an academic paper I wrote about economics (a paper which has been read by the professional economist and top econ blogger Tyler Cowen, among other people). So while there’s testimonies and cross-examinations, you’re not really solving a crime here so much as reading a Socratic dialogue of sorts about economics. It’s been playtested for bugs, but let me know if you catch anything I missed.
Let it load. The first few frames are supposed to be just black with dialogue, but if they’re still that way after the green text with the date and time, just wait till it loads. Parts of the game might look weird because the background will be partially loaded.
Gameplay is simple. You click on the arrow to make the dialogue progress. Don’t press too fast or every now and then you’ll miss a piece of dialogue. A few times you’ll be asked questions. Pick the right answer. Sometimes you’ll have to pick the right evidence to present. Pick the correct evidence and click present.
During cross examinations (when the big arrow splits into a two smaller forward and backwards arrows), you can move backwards and forwards between the pieces of the testimony. Press “press” to ask a question. This is always a good idea. Occasionally you’ll need to present the right evidence at the right part of the testimony to advance, but be careful: present the wrong evidence at the wrong part of the testimony and you’ll incur a penalty. Too many penalties and you lose.
It’s still missing music (in my defense, my aged computer is not able to play sound right now, so I couldn’t select any music), but I hope that doesn’t prevent you from playing the game and learning something from the dialogue.
I don’t expect this to interest all of you, but if you find economics interesting give it a go. The worst that happens is that you waste half an hour of your life playing a game on the internet—like you’ve never done that before, amirite?
I would appreciate any comments you have about improving the trial, gameplay, and writing, as well as what you think about the subject of the Socratic dialogue, both your own thoughts and your comments on the arguments presented in the trial. I particularly need help steelmanning the prosecution.
This is part one of three. The other two parts are in progress. They are similar to the first part but advance and draw out the implications of the argument presented in part one.
Enjoy.
Cool. I haven’t played the Ace Attorney games in a while, but I’ll check this out.
I’m trying to decide whether to marry someone, but I’m having a lot of trouble deciding. Anyone have any advice?
1) do you plan on spending a long period of time in a relationship with someone?
2) do you have a job where they will get benefits from being married to you, or vice versa?
3) do you expect to have children or buy property soon?
4) do you hang out with people who care whether or not you’re married rather than just a long-term couple?
5) do you expect the other person to ever leave you and take half your stuff?
6) do you want to have a giant ceremony?
7) do you live in a country where you get tax credits or something for being married?
8) do you expect yourself or them to act differently if “married” or not?
9) do you have the money to blow on a wedding?
10) is there any benefit to getting married soon over later? If you expect to be together in several years as a married couple, can you just stay together a year and THEN get married?
These are some useful questions off the top of my head for this situation.
Don’t forget to include the probability of a divorce (use outside view) and likely consequences.
Ain’t that the 5 in drethelin’s list?
Oops, I somehow skipped that one.
Other than in special circumstances, I think marriage is one of those occasions where “having trouble deciding” pretty clearly means “NO”.
It could also mean “Not now”.
If in doubt, don’t. There is rarely a good reason to formalize a relationship these days until you are absolutely sure that he/she is the one.
You might be interested in the textbook that I recommended here, which includes some general information about patterns in relationships that predict how-long-people-that-are-married-stay-married.
I am aware that I am recommending a 500 page textbook in response to your request for advice, and that this is kinda absurd. I am not familiar enough with the material to be able to (given the amount of effort that I am willing to dedicate to the task) summarize the relevant information for you, but figured that the link would be literally better than nothing.
Are you already married? What do your current spouses say?
While funny as jests go, your reply sounds rather condescending in the “transhumanists are better than muggles” sort of way. Unless I misunderstand your point.
I am not currently married.
Start with a list :-)
First figure out why you’re trying to decide that (the pros) and write it down. Then figure why you haven’t decided yet (the cons) and write those down.
If writing them down isn’t enough, try to figure out a way to put numbers on each item. (Exactly what kind of numbers depends on you, and figuring that is part of the solution.)
If that doesn’t work, then ask for help, with the list.
How credible is the research that inspired this popularisation? The subject is the effect of status on antisocial behaviour and so forth. Nothing seemed particularly surprising to me, but that may be confirmation bias with respect to my general philosophy and way of thinking.
So, there’s this multiplayer zombie FPS for the blind called Swamp, and the developer recently (as in the past few months) added an AI to help with the massive work of banning troublemakers who use predictable methods to subvert bans. Naturally, a lot of people distrust the AI (which became known as Swampnet), and it makes a convenient scapegoat for all the people demanding to be unbanned (when it turns out that they did indeed violate the user agreement).
In the past 24 hours, several high-status, obviously innocent players started getting banned. I predicted that someone was using their passwords, while everyone else went on about how Swampnet is clearly unreliable. I was tempted to throw around terms like dictionary attack, but decided against making such a specific prediction, especially without fully understanding dictionary attacks myself.
The developer confirmed that someone had been grabbing people’s passwords to link them to his (banned) account, which Swampnet uses to treat them as the same person. He also confirmed that the number of tries involved meant the villain was not brute-forcing it, but also that he hadn’t hacked the server or intercepted data packets, making him wonder if there isn’t some obvious list of passwords being shared or something.
Meta: I probably shouldn’t feel as good about outpredicting everyone and wisely avoiding getting too specific as I do. If I’d outpredicted the majority of, say, LWers, then it would feel way more justified, but that community’s selection pressures are not directed toward prediction power.
Addendum: I reread the discussion. I treated the first ban as a possible bug, but after the second clearly innocent banning, I decided it must be a hacker, and even jumped online a few times to see if I’d been hit. Posting only because I was afraid I’d used the word “immediately” in referring to my prediction (I did, and edited it out accordingly), when it took two data points for me to update to the successful prediction.
Has anyone read Dennett’s Intuition Pumps? I’m thinking of reading it next. The main thing I want to know: does he offer new ways of thinking which one can actually apply while thinking about (a) everyday situations and (b) math and physics (which is my work).
Read and reviewed. I’d get it from a library and take a few notes, but not buy it. The book is a mix of practical habits for everyday situations, explanations of how computers and algorithms work, and high-level problems in philosophy of consciousness.
If you’re simply looking for better ways to use thought experiments in everyday life, you can bail out after the first few sections.
Thanks! Your review was very helpful. Especially when you pointed out that the examples he uses to demonstrate his intuition pumps are in highly abstract and non-everyday scenarios. That was exactly what I was worried about: even if I pick up a more sophisticated vocabulary to handle ideas, I’d have to try to come up with many examples myself in order to internalize it (though, it’d probably be worth it).
I’m only about one-quarter of the way into it. So I’m not so sure about your questions; but I expect that I’d suggest it as a more-philosophical, less-empirical companion to Kahneman’s Thinking, Fast and Slow as an introduction to This Sort Of Thing. A lot of it does seem to have the summary nature, which is review for anyone not new to the subject; for instance, there’s yet another intro to Conway’s Life in (IIRC) one of the appendices. But it’s intended as an introductory book.
I can imagine a pretty good undergraduate “philosophy, rationality, and cognition” course using this book and Kahneman (among others). A really interesting course might use those two, Drescher’s Good and Real, and maybe Gary Cziko’s Without Miracles to cover evolutionary thinking ….
Is it possible to train yourself in the Big Five personality traits? Specifically, conscientiousness seems to be correlated with a lot of positive outcomes, so a way of actively promoting it would seem a very useful trick to learn.
Note: The following post is a cross of humor and seriousness.
After reading another reference to an AI failure, it seems to me that almost every “the AI is an unfriendly failure” story begins with “The humans are wasting too many resources, which I can more efficiently use for something else.”
I felt like I should also consider potential solutions that look at the next type of failure. My initial reasoning is: Assuming that a bunch of AI researchers are determined to avoid that particular failure mode and only that one, they’re probably going to run into other failure modes as they attempt (and probably fail) to bypass that.
For instance: AI researchers build an AI that gains utility roughly equal to the square root of (median human profligacy) times the human population times time, is dumb about metaphysics, and has a fixed utility function.
It’s not happier if the top human doubles his energy consumption. (Note: median human profligacy.)
It’s happier, but not twice as happy, when humans are using twice as many petawatt-hours per year. (Note: the square root also helps prevent “one human kills all other humans from space and sets the earth on fire” from being a good use of energy. That skyrockets the median, but it does not skyrocket the square root of the median nearly as much.)
It’s five times as happy if there are five times as many humans, and ten times as happy when humans use the same amount of energy per year for 10 years as opposed to just 1.
Dumb about metaphysics is a reference to the following type of AI failure: “I’m not CERTAIN that there are actually billions of humans, we might be in the matrix, and if I don’t know that, I don’t know if I’m getting utility, so let me computronium up earth really quick just to run some calculations to be sure of what’s going on.” Assume the AI just disregards those kinds of skeptical hypotheses, because it’s dumb about metaphysics. Also assume it can’t change its utility function, because that’s just too easy to combust.
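Here is a minimal sketch of that utility function in Python, assuming arbitrary energy units and one unit of time; the function name and the sample numbers are purely illustrative, not anything from the original description:

    import statistics

    def ai_utility(energy_per_person, years=1.0):
        # Toy utility: sqrt(median human energy use) * population * time.
        median_use = statistics.median(energy_per_person)
        return (median_use ** 0.5) * len(energy_per_person) * years

    # The top human doubling his consumption barely matters, because the
    # median of the distribution does not move:
    print(ai_utility([100, 50, 25, 13, 6, 3, 2, 1]))   # ~24.7
    print(ai_utility([200, 50, 25, 13, 6, 3, 2, 1]))   # ~24.7 (unchanged)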
As I stated, this AI has bunches of failure modes. My question is not “Does it fail?” but “Does it even sound like it avoids having ‘eat humans, make computronium’ be the most plausible failure? If so, what sounds like a plausible failure?”
Example hypothetical plausible failure: The AI starts murdering environmentalists because it fears that environmentalists will cause an overall degradation in median human energy use, which lowers AI utility; environmentalists also encourage less population growth, which further degrades AI utility; and while the AI does value the environmentalists’ own energy consumption, which boosts utility, they’re environmentalists, so they have a small energy footprint, and it doesn’t value not murdering people in and of itself.
After considering that kind of solution, I went up and changed “my reasoning” to “my initial reasoning”, because at some point I realized I was just having fun considering this kind of AI failure analysis and had stopped actually trying to make a point. Also, as Failed Utopia 4-2 (http://lesswrong.com/lw/xu/failed_utopia_42/) points out, designing more interesting failures can be fun.
Edit for clarity: I AM NOT IMPLYING THE ABOVE AI IS OR WILL CAUSE A UTOPIA. I don’t think it could be read that way, but just in case there are inferential gaps, I should close them.
Really? I think the one I see most is “I am supposed to make humans happy, but they fight with each other and make themselves unhappy, so I must kill/enslave all of them”. At least in Hollywood. You may be looking in more interesting places.
Per your AI, does it have an obvious incentive to help people below the median energy level?
To me, that seems like a very similar story; it’s just that they’re wasting their energy on fighting/unhappiness. I just thought I’d attempt to make an AI that thinks “Humans wasting energy? Under some caveats, I approve!”
I made a quick sample population (8 people, using 100, 50, 25, 13, 6, 3, 2, 1 energy, assuming only one unit of time) and ran some numbers to consider incentives.
The AI got around 5.8 utility from taking 50 energy from the top person and giving 10 energy each to the bottom 4, assuming the remaining 10 energy either went unused or was lost as a transaction cost. However, the AI also got about 0.58 more utility from killing any one of the four bottom people (even assuming their energy vanished).
Of note, roughly doubling the size of everyone’s energy pie does get a greater amount of Utility then either of those two things (Roughly 10.2), except that they aren’t exclusive: You can double the Pie and also redistribute the Pie (and also kill people that would eat the pie in such a way to drag down the Median)
Here’s an even more bizzare note: When I quadrupled the population (giving the same distribution of energy to each people, so 100x4, 50x4, 25x4, 13x4, 6x4,3x4, 2x4, 1x4) The Algorithm gained plenty of additional utility. However, the amount of utility the algorithm gained by murdering the bottom person skyrocketed (to around 13.1) Because while it would still move the Median from 9.5 to 13, the Squareroot of that Median was multiplied by a much greater population than when Median was multiplied by a much greater population. So, if for some reason, the energy gap between the person right below the Median and the person right above the Median is large, the AI has a significant incentive to murder 1 person.
In fact, the way I set it up, the AI even has incentive to murder the bottom 9 people to get the Median up to 25.… but not very much, and each person it murders before the Median shifts is a substantial disutility. The AI would have gained more utility by just implementing the “Tax the 100′s” plan I gave earlier than instituting either of those two plans, but again, they aren’t exclusive.
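For anyone who wants to check these figures, here is a minimal Python sketch of the utility function as I understand it from the description above (square root of the median per-capita energy, times population, times years); on the sample population above, the kill-the-bottom-person and double-the-pie scenarios come out to roughly the values quoted:

```python
import statistics
from math import sqrt

def utility(energies, years=1):
    # U = sqrt(median per-capita energy) * population * years
    return sqrt(statistics.median(energies)) * len(energies) * years

base = [100, 50, 25, 13, 6, 3, 2, 1]   # the 8-person sample, 1 unit of time
print(utility(base))                                      # ~24.7 baseline

# Killing the bottom person moves the median from 9.5 to 13:
print(utility(base[:-1]) - utility(base))                 # ~+0.58

# Roughly doubling everyone's energy pie:
print(utility([2 * e for e in base]) - utility(base))     # ~+10.2

# Quadrupled population, same distribution, then kill one bottom person:
big = base * 4
print(utility(sorted(big)[1:]) - utility(big))            # ~+13.1
```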
I somehow got: murder can be justified, but only of people below the median, and only in those cases where it jukes the median sufficiently; in general, helping them by taking from people above the median is more effective, but you can do both.
Assuming a smoother distribution of energy expenditures in the population of 32 appeared to prevent this problem. Given a smoother energy distribution, the median does not jitter so much when a bottom person dies, and murdering bottom people goes back to causing disutility.
However, I have to admit that in terms of Novel ways an algorithm could fail, I did not see the above coming: I knew it was going to fail, but I didn’t realize it might also fail in such an oddly esoteric manner in addition to the obvious failure I already mentioned.
Thank you for encouraging me to look at this in more detail!
Note that killing people is not the only way to raise the median. Another technique is taking resources and redistributing them. The optimal first-level strategy is to only allow minimum-necessary-for-survival to those below the median (which, depending on what it thinks “survival” means, might include just freezing them, or cutting off all unnecessary body parts and feeding them barely nutritious glop while storing them in the dark), and distribute everything else equally between the rest.
Also, given this strategy, the median of human consumption is 2×R/(N-1), where R is the total amount of resources and N is the total number of humans. The utility function then becomes sqrt(2×R/(N-1)) × N × T. Which means that for the same resources, its utility is maximized if the maximum number of people use them. Thus, the AI will spend its time finding the smallest possible increment above “minimum necessary for survival”, and maximize the number of people it can sustain, keeping (N-1)/2 people at the minimum and (N-1)/2+1 just a tiny bit above it, and making sure it does this for the longest possible time.
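A quick sketch of that last point, assuming the total resources R stay fixed (the R = 200 below is just the total energy of the 8-person sample upthread, picked for illustration): under this strategy the utility grows roughly like sqrt(N), so for fixed resources it is indeed always worth cramming in more people:

```python
from math import sqrt

def skew_strategy_utility(R, N, T=1):
    # Median consumption is ~2*R/(N-1) under the 'half at bare survival' strategy,
    # so U = sqrt(2*R/(N-1)) * N * T.
    return sqrt(2 * R / (N - 1)) * N * T

R = 200  # total resources; illustrative value only
for N in (8, 32, 128, 512):
    print(N, round(skew_strategy_utility(R, N), 1))
# Utility roughly doubles every time N quadruples, i.e. it grows like sqrt(N).
```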
Well, even if it turned out to do exactly what its designers were thinking (which it won’t), it’d still be unfriendly, for the simple fact that no remotely optimal future is likely to involve humans with big energy consumption. The FAI almost certainly should eat all humans for computronium; the only difference is that the friendly one will scan their brains first and make emulations.
You get an accurate prediction point for guessing that it wouldn’t do what its designers were thinking: even if the designers assumed it would kill environmentalists (and so assumed it was flawed), a more detailed look, as Martin-2 encouraged me to do, found that it also treats murder as a utility benefit in at least some other circumstances.
The Good Judgement Project is using the Brier score to rate participants’ forecasts. This is not LW’s usual preferred scoring system (negative log odds); Brier is much more forgiving of incorrect assignments of 0 probability. I checked the maths, and your expected score is still minimised by honestly reporting your subjective probabilities, but are there any more subtle ways to game the system?
Perhaps it encourages one to make long-shot bets? If you aren’t penalized too badly for P=0 events happening, this suggests that short-selling contracts at ~1% may be better than it looks.
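For concreteness, a small sketch of the two scoring rules (both are proper, i.e. minimised in expectation by honest reporting, but the Brier score stays bounded while the log score blows up on events you called near-impossible):

```python
import math

def brier(p, happened):
    # Squared error of the forecast; bounded between 0 and 1, lower is better.
    return (p - happened) ** 2

def neg_log(p, happened):
    # Negative log score; unbounded, lower is better.
    return -math.log(p if happened else 1 - p)

# A confidently wrong forecast: you said 1% and the event happened.
print(brier(0.01, 1))    # 0.9801 -- barely worse than the 0.81 for saying 10%
print(neg_log(0.01, 1))  # ~4.61  -- and this diverges to infinity as p -> 0
```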
Is there a name for the bias of assuming that information can just happen, rather than having to be derived by someone using some means?
You might be after the ‘myth of the given’, which is Wilfrid Sellars’ coinage in Empiricism and the Philosophy of Mind. ‘Given’ is just the English translation of ‘datum’, and so the claim is something like ‘It is a myth that there is any such thing as pure data.’
The slightly more complicated point is that foundationalist theories of empiricism (for example) involve the claim that while most knowledge is justified by inferences of some kind, there is a foundation of knowledge that is justified simply by the way we get it (e.g. through the senses, intellectual intuition, etc.). Sellars argues that no such foundation is possible, and so far as I can tell his argument is more or less accepted today, for whatever that’s worth.
Hm. One interpretation sounds like the philosophical position of a priori knowledge,* but you might mean knowledge existing independent of a mind, which I don’t know of a shorter phrase to describe.
*I think this is actually somewhat well validated, under the name of “instinct,” and humans appear to have lots of instincts.
One example would be that people tend to think that their senses automatically give them information, while in fact the senses and their interpretation are a very complex process.
Another would be (from what Root-Bernstein says) that very good scientists are fascinated by their tools—they’re the ones who know that the tool might not be measuring what they think it’s measuring.
And indeed, to capture this notion is why Kant made the distinction between analytic and synthetic a priori knowledge in the first place.
Instincts wouldn’t be a case of a priori knowledge, I think just because they couldn’t be considered a case of knowledge. But at any rate, ‘a priori’ doesn’t mean ‘innate’, or even ‘entirely independent of experience’. A priori knowledge is knowledge the truth of which does not refer to any particular experience or set of experiences. This doesn’t imply anything about whether or not it’s underived or anything like that: most people who take a priori knowledge to be a thing would consider a mathematical proof a case of a priori justification, and those are undoubtedly derived by some particular person at some particular time using some particular means. (I’m not endorsing the possibility of a priori knowledge, just trying to clarify the idea).
Seems like a version of the Illusion of transparency (possibly in reverse):
What you describe is more like “a tendency for people to overestimate the degree to which” their senses are accurate and assume that they are a true representation of external reality.
Second the question.
Anyone have a good recommendation for an app/timer that goes off at pseudo-random (not too short—maybe every 15 min to an hour?) intervals? Someone suggested to me today that I would benefit from a luminosity-style exercise of noting my emotions at intervals throughout the day, and it seems like something I ought to automate as much as possible
It takes a bit of work to set up, but Tagtime does both the notifications and the logging
Thanks! Downloaded it, will report back after trying for a bit
Why do the maps of meetups on the front page and on the meetups page differ? Why do neither of them show the regular meetups?
Does anyone know anything about yoga as a spiritual practice (as opposed to exercise or whatever)? I get the sense that it’s in the same “probably works” category as meditation and I’d be interested in learning more about it, but I don’t know where to start, and I feel like there’s probably “real” yoga and “pop” yoga that I need to be able to differentiate between.
Also, I can’t sit in any of the standard meditation positions—I can only do maybe five minutes indian-style before I get intense pain. When I ask people how to remedy this, they tell me “do yoga”, but aren’t any more specific than that.
If someone knowledgeable could point me towards a good starting point or a resource, that would be great.
A local yoga course. Having a teacher that can tell you what you are doing wrong is very valuable.
When it comes to meditation the same applies. Go to a local Buddhist temple and let them guide you in learning meditation.
Taoist meditation is done either standing or sitting in a chair.
Source: I’ve read a moderate amount about this, so there may be exceptions.
I did standing meditation from Lam Kam Chuen’s The Way of Energy for a while, and cleared up a case of RSI.
I know that meditation is possible while sitting in a chair, and I do it about half the time (the other half I sit on the ground sort of like this, just because I like it). I kind of want to be able to do it the standard way so I can fulfill an irrational urge to “feel like a real Buddhist”, which I think would motivate me.
This is deeply funny. Buddhism is about getting rid of urges.
Secondly, seiza is also a position in which a lot of Buddhists meditate, and sitting that way is usually easier.
Thirdly, it seems like you are somehow trying to do Buddhism on your own without a teacher, when having an in-person teacher is a core element of Buddhism.
Go see a doctor and don’t leave until you get a specific diagnosis or treatment.
Careful. Sometimes the treatment can be worse than the disease.
Are you implying that something is very wrong with me if I can’t sit Indian style and that I should see a doctor right away, or are you just saying that this would be an effective way to solve my problem?
Effective way. You obviously have some kind of problem that other people don’t have that gives you discomfort without any obvious way to solve it. Seeing a doctor helps to rule out some underlying, organic problem. I don’t know about very wrong, but being able to sit only five minutes indian style seems very low.
Oh okay, for some reason when I first read your comment I got a sense of urgency from it. Thanks for clarifying.
I’ve seen yoga books which explain how to ease into sitting in full lotus.
I was going on the description of “intense pain”. I know from personal experience that you need to ease into the lotus, but I never experienced anything that I would describe as “intense pain”, at most “mild to moderate discomfort” after five minutes. Anyway, gothgirl420666 was having a problem without any obvious solutions, as evidenced by the lack of proposed solutions from his peers, so I suggested paying a visit to a professional with extensive domain knowledge.
When you say “Indian style” do you mean with your feet under your thighs or on top of them?
Under.
That’s more of a physical limitation than I first interpreted you as meaning. Still, I’m not going to put it in the “OMG, must be solved” category.
Feldenkrais Method (an approach of gentle repeated movements to increase physical awareness and coordination) might be a good idea. Somatics by Thomas Hanna has a daily cat stretch which takes about ten minutes to do and, as I recall, about two hours to learn.
A little extra explanation: I’ve found that knee and hip problems can actually be a result of a tight lower back, and Feldenkrais can help.
What kind of a doctor?
what does the ‘add a friend’ feature on this site actually do?
Controls whose posts appear at http://lesswrong.com/r/friends/ . (Only posts are shown, not comments.)
I never noticed it until now. I’m curious to know how many people use it.
Adds a friend.
Running an interest check for an “Auto-Bayes.”
Something I’ve noticed when reading articles on the web is that I occasionally run across the same beliefs, but have completely forgotten my last assigned probability—my current prior. In order to avoid this, I’m writing a program that keeps track of a database of beliefs and current priors, with automated Bayes updating. If nothing else, it’ll also make it easier to get statistics on how accurate my predictions are, and keep me honest.
Anyway, I got halfway started and realized that this might be something other people might be interested in, so: interest check!
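In case it helps gauge interest, here is the kind of minimal sketch I have in mind (the file name, the default 0.5 prior, and the odds-form update via a likelihood ratio are my own assumptions about how such a tool might work, not a description of the actual program):

```python
import json
from pathlib import Path

DB = Path('beliefs.json')  # hypothetical storage location

def load():
    return json.loads(DB.read_text()) if DB.exists() else {}

def update(belief, likelihood_ratio):
    # Bayes in odds form: posterior odds = prior odds * likelihood ratio,
    # where likelihood_ratio = P(evidence | belief) / P(evidence | not belief).
    beliefs = load()
    p = beliefs.get(belief, 0.5)           # default prior for a new belief
    odds = p / (1 - p) * likelihood_ratio
    beliefs[belief] = odds / (1 + odds)
    DB.write_text(json.dumps(beliefs, indent=2))
    return beliefs[belief]

# e.g. evidence you judge 3x as likely if the belief is true:
# update('this supplement actually improves my sleep', 3.0)
```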
Do animal altruists regard zoos as a major contributor to animal suffering? Or do the numbers not compare when matched up against factory farming and the like?
While I don’t know what animal altruists think, these statistics might give an (extremely) rough idea of the numbers.
(the second one is only cattle and doesn’t distinguish between humane/inhumane conditions, though 80-90% of cattle are in feedlots with >1000 head, so you could draw some order-of-magnitude comparisons)
(Longpost warning; I find myself wondering if I shouldn’t post it to my livejournal and just link it here.)
A few hours shy of a week ago, I got a major update to my commercial game up to releasable standards. When I got to the final scene, I was extremely happy—on a scale of 1=omnicidally depressed to 10=wireheading, possibly pushing 9 (I’ve tried keeping data on happiness levels in April/May and determined that I’m not well calibrated for determining the value of a single point).
That high dwindled, of course, but for about 24 hours it kept up pretty well.
Since then, I’ve been thoroughly unable to find anything I feel motivated enough to actually work on. I’ve come close on a couple projects, but nothing ever comes of them. So for the most part, the past week has been right back into the pits of despair. If I’m not noticeably accomplishing something, I’m averaging 3-4 or so on the above scale (I haven’t been recording hourly data in the past week). Mostly, the times when I manage to get up around 5-6 are when I’m able to go off and think about something; when I actually try to do anything on the computer, it all drops rapidly.
So far, my method for finding something to work on has been pretty feeble. “Seek out something among the projects we’ve already identified as worth pursuing; if failed, let mind wander and hope something sticks.” The major update that I managed to work on for the previous two or so weeks arose from an idea not among any of the projects I had in mind (in a round-about way, it came from someone’s Facebook status); more ideas grew from it, until I decided to just add them to the existing game, since they fit there about as well as in something new, and would force me to make some long-needed improvements.
That game itself had its origins in a similar situation; I was trying to work on a different but related project, and complained about the impenetrable Akrasiatic barrier to the very same person whose status spawned the recent updates. He made a vague suggestion, I was able to start on it, and the project grew out of that, and was easy enough to edit that it continued expanding.
This does seem to apply primarily to game development; music/fiction don’t seem to follow this trend that I’ve noticed. At most, I wind up defining a few classes for what I want to work on, and in the best cases make some menus but don’t really do much if any testing of the game’s engine. The things that do get done are usually just tiny, non-serious things done on a whim that can evolve into something more serious if the earliest results are pleasant enough.
This sucks and I want to change it and have no idea how to do so. Accomplishment = superhappy and unpredictable, non-accomplishment = depressedly coasting until something happens. Success spirals only seem to work over a very brief interval mid-awesome, assuming I can be distracted from said awesome long enough to do something else worthwhile (as happened the first time I marathoned HPMoR and The Motivation Hacker); it’s much harder to get a success spiral out of awesome spawned from work, since I’m much less willing to take the risk of turning away from the work for any longer than it takes to remain functional.
Just trying to think of some possible ideas...
How much time, by the clock, have you spent trying to think of different things you could be doing? If you haven’t, it could be helpful to just sit down and brainstorm as much stuff as you can.
Also, maybe doing something fairly easy but that seems “productive” could be helpful in starting a success spiral getting you back up to your previous speeds; possibly online code challenges or something like that.
Or maybe you should be trying to draw on other things that could make you happier, like hanging out with friends.
I haven’t committed any numbers to memory, but my time is mostly divided between trying to think my way to doing something and trying to avoid drowning in frustration by wasting time on the internet. Just judging by how today has gone so far, it seems to be roughly 1:2 or 1:3 in favor of wasting time. I did briefly turn off the internet at one point, and that seemed to help some, although I still didn’t manage to make good use of that time.
I have no such opportunities of which I am aware.
I recommend poking around in your mind to find out what’s actually in your mind, especially when you’re considering taking action. I’ve found it helpful to find out what’s going on before trying to make changes.
I tried to follow this, though I’m not sure I did it in quite the way you meant, and I realized something potentially useful, then immediately (after staying focused on the introspection task for quite some time) wound up wandering off to think about Harry Potter and other things not at all useful to solving the problem. I can only assume my brain decided that the epiphany was sufficient and we were free to cool down.
Anyway, this does seem like a useful direction for now, so thanks!
I’m glad my suggestion helped.
I’m not sure what you thought I meant, but there might be an interesting difference between finding out what’s going on at the moment vs. finding out what one’s habits are—I’ve had exploration work out both ways.
What’s the difference between a simulation and a fiction?
I posted this in the previous open thread, and would like to carry on the discussion into this thread. As before, I regard this entire subject as a memetic hazard, and will rot13 accordingly. Also, if you’re going to downvote it, at least tell me why; karma means nothing to me, even in increments of 5, but it makes others less likely to respond.
Jung qbrf rirelbar guvax bs Bcra Vaqvivqhnyvfz, rkcynvarq ol Rqjneq Zvyyre nf gur pbaprcg juvpu cbfvgf:
Gur pbaprcg vf rkcynvarq nf n pbagenfg sebz gur pbairagvbany ivrj bs Pybfrq Vaqvivqhnyvfz, va juvpu gurer ner znal crefbaf naq gur Ohqquvfg-yvxr ivrj bs Rzcgl Vaqvivqhnyvfz, va juvpu gurer ner ab crefbaf.
V nfxrq vs gurer jrer nal nethzragf sbe Bcra Vaqvivqhnyvfz, be whfg nethzragf ntnvafg Pybfrq naq Rzcgl Vaqvivqhnyvfz gung yrnir BV nf gur bayl nygreangvir. Vpbcb Irggbev rkcynvarq vg yvxr guvf:
Gur rrevrfg cneg nobhg gur Snprobbx tebhc “V Nz Lbh: Qvfphffvbaf va Bcra Vaqvivqhnyvfz” vf gung gur crbcyr va gung tebhc gerng gur pbaprcg bs gurer orvat bayl bar crefba gur fnzr jnl gung Puevfgvnaf gerng gur pbaprcg bs n Tbq gung jvyy qnza gurve ybirq barf gb Uryy sbe abg oryvrivat va Uvz. Vg’f nf vs ab bar va gur tebhc ernyvmrf gur frpbaq yriry vzcyvpngvbaf bs gurer abg orvat nalbar ryfr, be znlor gurl qba’g rira pner.
Another day, another (controversial) opinion!
http://protokol2020.wordpress.com/2013/07/17/is-p-np/
I think this misunderstands the state of modern complexity theory.
There are lots of NP-complete problems that are well known to have highly accurate approximations that can be computed efficiently. The knapsack problem and traveling-salesperson in 2D Euclidean space are both examples of this. Unfortunately, having an epsilon-close approximation for one NP-complete problem doesn’t necessarily help you on other NP-complete problems.
There’s nothing particularly magic about evolutionary algorithms here. Any sensible local search will often work well on instances of NP-complete problems.
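As a concrete illustration of what “sensible local search” means here, a toy (1+1) evolutionary algorithm on a tiny knapsack instance (the instance, the 1/n mutation rate, and the iteration budget are all arbitrary illustrative choices); it usually lands on or near the optimum even though it knows nothing about the problem beyond the fitness function:

```python
import random

def one_plus_one_ea(values, weights, capacity, iters=20_000, seed=0):
    # (1+1) EA for 0/1 knapsack: flip each bit with probability 1/n,
    # keep the child whenever it is feasible and at least as good.
    rng = random.Random(seed)
    n = len(values)

    def fitness(sol):
        w = sum(wi for wi, si in zip(weights, sol) if si)
        v = sum(vi for vi, si in zip(values, sol) if si)
        return v if w <= capacity else -1   # reject infeasible solutions

    parent, best = [0] * n, 0
    for _ in range(iters):
        child = [bit ^ (rng.random() < 1 / n) for bit in parent]
        f = fitness(child)
        if f >= best:
            parent, best = child, f
    return best, parent

values  = [60, 100, 120, 30, 70]
weights = [10,  20,  30,  5, 15]
print(one_plus_one_ea(values, weights, capacity=50))
```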
Oh, yes! The evolutionary algorithm is not the only way, and certainly not the magic way. It’s just an example of how to sometimes cheat and get a good result for an NP problem. It’s the best cheater I know of, but likely not the only one.
Sometimes we can guess the answer. Sometimes we can roll the Monte Carlo. Simulated annealing is another way to “steal the NP gods’ fire”. And sometimes the error of our approximation might even be zero!
But the main point of my article is the innovating aspect of the evolutionary algorithm: unforeseen solutions get delivered, and this 249-circles-in-a-square packing is one of many. Humans are then the ones who do the fine-tuning on the basis of these EA solutions. They do the routine job of refining, after the EA has made the fundamental innovation.
I wouldn’t mind, normally. There is no thin red line between improvement and innovation; people just imagine it. But since they do, here they can see how humans and computers are both on the wrong sides of their discrimination line, as if they have swapped their “natural places”.
Unless you meant to imply a specific problem (and very probably even then), evolutionary algorithms are actually pretty stupid. I’ll even go out on a limb and claim that the evolutionary algorithm is the smartest of the stupid algorithms, where “stupid” means approximately “I understand nothing about the problem except that I can tell if some solutions are better than others, if I’m given examples”.
Of course, if the problem is complicated enough that might be the best we can do.
I’m not sure what you mean by “innovating”. A solution I receive from any algorithm that searches for (rather than verifies) solutions will usually be unforeseen. (If I foresaw it, I wouldn’t need to search for it, I’d just test it.)
They hold some world records on the density of packing.
If you held just one, would you call yourself stupid?
I guess not.
Why not? If I used an EA to get it, that basically means “I don’t know how to solve the problem, so I’ll just use the best method I know of for trying random solutions”.
Also, I’m the world record holder at looking like myself, that doesn’t mean that I’m smarter than anyone else, particularly in the sense of knowing how to build a person that looks like myself.
If your random guessing will provide a (previously unknown) solution, very well. But it probably won’t.
I am not talking about “maybe it would”; I am talking about “it did, indeed”.
Everybody has a few such records, but those are worthless. Some people, however, solved a difficult puzzle. On the particular site I gave you a link to, such a competition is going on. Maybe it reminds someone of LW’s PD agents competition?
Anyway, I don’t hold any record there. An algorithm I designed and called Pack’n’tile holds some. Follow the links, download the program and try it, if you want.
Sorry, I was unclear. By “best method I know for trying random solutions” I meant evolutionary algorithms. (Which I think of as “guess randomly, then mix guesses randomly, then mutate guesses randomly, then select randomly, biased towards the best you found, then repeat from step two”. Of course, there’s a bit of smartness needed when applying the randomness, but still.)
I think we’re having mostly a terminology disagreement. I tend to think of EA as “finding a solution” rather than “solving the problem”, which I agree is not the most logical and precise use of language.
On another subject, I fear I may have offended you. If so, I apologize, and kudos for keeping calm enough to make it hard for me to be sure :)
I specifically said that the algorithms are stupid. That wasn’t meant to disparage anyone who uses them. I know well that it’s not at all trivial to write such an algorithm, that there are good and bad ways of doing it, and that one can put a lot of cleverness into one. The authors of an algorithm that “won” a record on an important problem are very probably very smart people. But the algorithm itself may still be stupid, in the sense that it’s closer to brute force than to finding the solution with a minimum of (computing) effort.
Technically speaking, the EA is stupid in the sense that it’s very brief. The actual implementation is another matter.
But what is important in this context is the following: the algorithm’s results in this matching context are quite sloppy, in the sense that the squares don’t even touch each other to gain some more space. Still, the whole circle arrangement is so clever that it can afford this generosity and still win! Afterwards, humans often polish the evolved solution and claim the victory. Which is perfectly fine; the whole log exists.
Just curious, as I’m not familiar with that particular problem: are any of those records on “density of packing per FLOP”, or just “density of packing”?
You can best see what that is all about here.
OK, thanks, I’ll look if I have time, that’s a bit too much info to go through right now.
Confuses the analytical best solution (what P=NP would give us) with a numerical good-enough solution (what evolution approximates just well enough to get an advantage).
Exactly! Approximate. ~=.
Yes, but that doesn’t constitute “solving” NP in P; at best you have to work out a different approximation method for every instance of an NP problem.
Keep buggering.
http://protokol2020.wordpress.com/2013/07/21/the-tesla-myth/
I personally regard this entire subject as an example of a harmful meme, and will rot13 accordingly.
Jung qbrf rirelbar guvax bs Bcra Vaqvivqhnyvfz, rkcynvarq ol Rqjneq Zvyyre nf gur pbaprcg juvpu cbfvgf:
Gur pbaprcg vf rkcynvarq nf n pbagenfg sebz gur pbairagvbany ivrj bs Pybfrq Vaqvivqhnyvfz, va juvpu gurer ner znal crefbaf naq gur Ohqquvfg-yvxr ivrj bs Rzcgl Vaqvivqhnyvfz, va juvpu gurer ner ab crefbaf.
V nfxrq vs gurer jrer nal nethzragf sbe Bcra Vaqvivqhnyvfz, be whfg nethzragf ntnvafg Pybfrq naq Rzcgl Vaqvivqhnyvfz gung yrnir BV nf gur bayl nygreangvir. Vpbcb Irggbev rkcynvarq vg yvxr guvf:
Gur rrevrfg cneg nobhg gur Snprobbx tebhc “V Nz Lbh: Qvfphffvbaf va Bcra Vaqvivqhnyvfz” vf gung gur crbcyr va gung tebhc gerng gur pbaprcg bs gurer orvat bayl bar crefba gur fnzr jnl gung Puevfgvnaf gerng gur pbaprcg bs n Tbq gung jvyy qnza gurve ybirq barf gb Uryy sbe abg oryvrivat va Uvz. Vg’f nf vs ab bar va gur tebhc ernyvmrf gur frpbaq yriry vzcyvpngvbaf bs gurer abg orvat nalbar ryfr, be znlor gurl qba’g rira pner.
I’ve already posted this to the previous discussion thread, and despite losing a good chunk of my karma, I don’t feel entirely satisfied with the answers I received. Here’s a link to that post if you’re interested
Insanity is doing the same thing over and over again and expecting different results.
I have never seen that phrase applied to a scenario that had changing conditions.
Where are these downvotes coming from?
Me for one.
You’re being ridiculous when you rot13 this; it’s not actually important or interesting. It’s just a restatement of solipsism in bigger words. Reposting it when people originally thought it was worth downvoting makes it even more worthy of downvotes.
Brilliant, fantastic. I’ll be incredibly happy if anyone can link me to a counter argument, because this has been weighing rather heavily on me. Why else would I rot13 this?
Counter-argument to WHAT? As with solipsism, this doesn’t seem to anticipate any different experiences. This means that literally every piece of evidence is neutral between the two world-views. Either the universe is the universe and not in your head, or it’s in your head and seems like a universe. Either way you can reliably get results from interacting with it in certain ways. If the entire universe is a simulation in your brain, it’s exactly as complicated, and you have as much control over it, as if it wasn’t. Why do you care?
To put it another way, there’s no use considering the theory that a malevolent demon is in control of all the information you receive. Everything you know is because it wants you to know it. You can never prove one way or the other if it exists, so it may as well NOT exist.
Open Individualism.
Someone else already went that route, and I explained why Open Individualism wasn’t like solipsism. The discussion ended after my response.
That difference has no effect on your anticipated experiences.
Not that I’m advocating the existence of zombies, but technically neither does having a zombie for a boyfriend. Eliezer Yudkowsky didn’t knock down the zombie possibility by talking about anticipated experiences; he knocked it down by explaining the logical impossibility.
I don’t understand how saying that will make the concept go away.
The logical impossibility depended on how you couldn’t have conversations about consciousness without it. If you’re the only thing that exists how can I disagree with you about it? How did you learn about it from a philosopher?
Had I read the argument from someone else at an earlier date, I’d probably use an argument-from-difference like you are. Such a scenario is more than logically possible: I might have actually considered the problem in the past. I have no doubt that the person who had once disagreed with OI is me.
Do you want to take this to PM, if only to save on your karma?
Nah I’ve pretty much lost interest. Sorry. I don’t particularly care about my karma except as evidence on average. The troll toll doesn’t bother me. This issue is a lot less important to me because I’m not coming from a position of believing I’m the most important thing.
Neither do I.
In that case, can you direct me to any relevant resources?