Our Phyg Is Not Exclusive Enough
EDIT: Thanks to people not wanting certain words google-associated with LW: Phyg
Lesswrong has the best signal/noise ratio I know of. This is great. This is why I come here. It’s nice to talk about interesting rationality-related topics without people going off the rails about politics/fail philosophy/fail ethics/definitions/etc. This seems to be possible because a good number of us have read the lesswrong material (sequences, etc), which inoculates us against that kind of noise.
Of course Lesswrong is not perfect; there is still noise. Interestingly, most of it is from people who have not read some sequence and thereby make the default mistakes or don’t address the community’s best understanding of the topic. We are pretty good about downvoting and/or correcting posts that fail at the core sequences, which is good. However, there are other sequences, too, many of them critically important to not failing at metaethics/thinking about AI/etc.
I’m sure you can think of some examples of what I mean. People saying things that you thought were utterly dissolved in some post or sequence, but they don’t address that, and no one really calls them out. I could dig up a bunch of quotes but I don’t want to single anyone out or make this about any particular point, so I’m leaving it up to your imagination/memory.
It’s actually kind of frustrating seeing people make these mistakes. You could say that if I think someone needs to be told about the existence of some sequence they should have read before posting, I ought to tell them, but that’s actually not what I want to do with my time here. I want to spend my time reading and participating in informed discussion. A lot of us do end up engaging with mistaken posts, but that lowers the quality of discussion here because so much time and space has been spent battling ignorance instead of advancing knowledge and discussing real problems.
It’s worse than just “oh here’s some more junk I have to ignore or downvote”, because the path of least resistance ends up being “ignore any discussion that contains contradictions of the lesswrong scriptures”, which is obviously bad. There are people who have read the sequences and know the state of the arguments and still have some intelligent critique, but it’s quite hard to tell the difference between that and someone explaining for the millionth time the problem with “but won’t the AI know what’s right better than humans?”. So I just ignore it all and miss a lot of good stuff.
Right now, the only stuff I can reasonably trust to be intelligent, informed, and interesting is the promoted posts. Everything else is a minefield. I’d like there to be something similar for discussion/comments. Some way of knowing “these people I’m talking to know what they are talking about” without having to dig around in their user history or whatever. I’m not proposing a particular solution here, just saying I’d like there to be more high quality discussion between more properly sequenced LWers.
There is a lot of worry on this site about whether we are too exclusive or too phygish or too harsh in our expectation that people be well-read, which I think is misplaced. It is important that modern rationality have a welcoming public face and somewhere that people can discuss without having read three years worth of daily blog posts, but at the same time I find myself looking at the moderation policy of the old sl4 mailing list and thinking “damn, I wish we were more like that”. A hard-ass moderator righteously wielding the banhammer against cruft is a good thing and I enjoy it where I find it. Perhaps these things (the public face and the exclusive discussion) should be separated?
I’ve recently seen someone saying that no-one complains about the signal/noise ratio on LW, and therefore we should relax a bit. I’ve also seen a good deal of complaints about our phygish exclusivity, the politics ban, the “talk to me when you read the sequences” attitude, and so on. I’d just like to say that I like these things, and I am complaining about the signal/noise ratio on LW.
Lest anyone get the idea that no-one thinks LW should be more phygish or more exclusive, let me hereby register that I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don’t have to agree with me, but I’d just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.
I’ve lurked here for over a year and just started posting in the fan fic threads a month ago. I have read a handful of posts from the sequences and I believe that some of those are changing my life. Sometimes when I start a sequence post I find it uninteresting and I stop. Posts early in the recommended order do this, and that gets in the way every time I try to go through in order. I just can’t be bothered because I’m here for leisure and reading uninteresting things isn’t leisurely.
I am noise and I am part of the doom of your community. You have my sympathy, and also my unsolicited commentary:
Presently your community is doomed because you don’t filter.
Noise will keep increasing until the community you value splinters, scatters, or relocates itself as a whole. A different community will replace it, resembling the community you value just enough to mock you.
If you intentionally segregate based on qualifications your community is doomed anyway.
The qualified will stop contributing to the unqualified sectors, will stop commending potential qualifiers as they approach qualification, and will stop driving out never qualifiers with disapproval. Noise will win as soon as something drives a surge of new interest and the freshest of the freshmen overwhelm the unqualified but initiated.
Within the fortress of qualification things will be okay. They might never feel as good as you think you remember, but when you look through that same lens from further ahead you might recognize a second Golden Age of Whatever. Over time less new blood will be introduced, especially after the shanty town outside the fortress burns to the ground a couple times. People will leave for the reasons people leave. The people left will become more insular and self referential. That will further drive down new blood intake.
Doomed.
What are you going to do about it?
The best steps to take to sustain the community you value in this instance may be different than the best steps to take to build a better instance of the community.
I suspect communities have a natural life cycle and most are doomed. Either they change unrecognisably or they die. This is because the community members themselves change with time and change what they want, and what they want and will put up with from newbies, and so on. (I don’t have a fully worked-out theory yet, but I can see the shape of it in my head. I’d be amazed if someone hasn’t written it up.)
What this theory suggests: if the forum has a purpose beyond just existence (as this one does), then it needs to reproduce. The Center for Modern Rationality is just the start. Lots of people starting a rationality blog might help, for example. Other ideas?
This is a good idea if and only if we can avoid summoning Azathoth.
You seem to be implying here that LW’s purpose is best achieved by some forum continuing to exist in LW’s current form.
Yes?
If so, can you expand on your reasons for believing that?
No, that would hold only if one thinks a forum is the best vehicle. It may not even be a suitable one. My if-then does assume a further “if” that a forum is, at the least, an effective vehicle.
(nods) OK, cool.
My working theory is that the original purpose of the OB blog posts that later became LW was to motivate Eliezer to write down a bunch of his ideas (aka “the Sequences”) and get people to read them. LW continues to have remnants of that purpose, but less and less so with every passing generation.
Meanwhile, that original purpose has been transferred to the process of writing the book I’m told EY is working on. I’m not sure creating new online discussion forums solves a problem anyone has.
As that purpose gradually becomes attenuated beyond recognition, I expect that the LW forum itself will continue to exist, becoming to a greater and greater extent a site for discussion of HP:MoR, philosophy, cognition, self-help tips, and stuff its users think is cool that they can somehow label “rational.” A small group of SI folks will continue to perform desultory maintenance, and perhaps even post on occasion. A small group of users will continue to discuss decision theory here, growing increasingly isolated from the community.
If/when EY gets HP:MoR nominated for a Hugo award, a huge wave of new users will appear, largely representative of science-fiction fandom. The proportion of LW devoted to HP:MoR discussion will double, as will the frequency of public handwringing about the state of the site. The level of discussion will rapidly plummet to standard Internet Geek. LW maintenance will be increasingly seen as a chore by SI folks, to be assigned to interns with nothing better to do. The more academic types will eventually decide it’s worth their while to create a new venue for their discussions.
If the book is published and achieves any degree of popularity, there will be a wave of new people joining to talk about the book. This will have all kinds of consequences, but one of them will be a huge increase in the 101-level discussions on LW, as high-school students all over the country decide to share their insights about reality and truth and rationality and philosophy, reminiscent of alt.fan.hofstadter Back in the Day.
And, more precisely
Like NaNoWriMo or thirty things in thirty days (which EY indirectly inspired) - giving the muse an office job. Except, of course, being Eliezer, he made it one a day for two years.
I’m responding to congratulate you on your correct prediction.
I see this account hasn’t been active in over four years.
If anyone does feel motivated to post just bare links to sequence posts, hit one of the Harry Potter threads. These seem to be attracting LW n00bs, some of whom seem actually pretty smart—i.e., the story is working to its intended purpose.
I can understand people wanting that. If the goal is to spread this information, however, I’d suggest that those wanting to be part of an Inner Circle should go Darknet, invitation only, and keep these discussions there, if you must have them at all.
As someone who has been around here maybe six months and comes every day, I have yet to drink enough kool aid not to find ridiculous elements in this discussion.
“We are not a Phyg! We are not a Phyg! How dare you use that word?” Could anything possibly make you look more like a Phyg than tabooing the word, and karmabombing people who just mention it? Well, the demand that anyone who shows up should read a million words in blog posts by one individual, and agree with most all of it before speaking, does give “We are not a Phyg!” a run for its money.
Take a step back, and imagine yourself at a new site that had some interesting material, and then coming on a discussion like this. Just what kind of impression would it give you?
Of course, if you just want to talk to the people who you consider have interesting things to say, that’s fine and understandable. In fact, I think this discussion serves your purpose well, because it will chase away new folks, and discourage those who haven’t been here long from saying much.
Given the current list software, sharing that infrastructure between those who want a pure playground and those who want new playmates creates an inevitable conflict. It is possible to have list filtering that is more fine grained and offers more user control, which would mitigate much of the problem. That would be a better solution than a Darknet, but it’s work.
I’m amused by the framing as a hypothetical. I’m far from being an old-timer, but I’ve been around for a while, and when I was new to this site a discussion like this was going on. I suspect the same is true for many of us. This particular discussion comes around on the guitar like clockwork.
What impression did it leave you?
In my case it left the impression that (a) this was an Internet forum like any other I’ve been on in the past seventeen years (b) like all of them, it behaved as though its problems were unique and special, rather than a completely generic phenomenon. So, pretty much as normal then.
BTW, to read the sequences is not to agree with every word of them, and when I read all the rest of the posts chronologically from 2009-2011 the main thing I got from it was the social lay of the land.
(My sociology is strictly amateur, though an ongoing personal interest.)
This is hardly my first rodeo, but this place is unlike any others I’ve been on for exactly the point at issue here—the existence of a huge corpus written overwhelmingly by one list member that people are expected to read before posting and relate their posts to. The closest I’ve come to such attitudes was on two lists; one Objectivist, one Anarchist.
On the Objectivist list, where there was a little bit of “that was all answered in this book/lecture from Rand”, people were not at all expected to have read the entire corpus before participating. Rand herself was not participating on the list, so there is another difference.
The Anarchist list was basically the list of an internet personality who was making a commercial venture of it, so he controlled the terms of the debate as suited his purposes, and tabooed issues he considered settled. Once that was clear to me, I left the site, considering it too phygish.
I’d imagine that there are numerous religious sites with the same kind of reading/relating requirements, but only a limited number of those where the author of the corpus was a member of the list.
To LW’s credit, “read the sequences” as a counterargument seems increasingly rare these days. I’ve seen it once in the last week or two, but considering that we’re now dealing with an unusually large number of what I’ll politely describe as contrarian newcomers, I’ll still count that as a win.
In any case, I don’t get the sense that this is an unknown issue. Calls for good introductory material come up fairly often, so clearly someone out there wants a better alternative to pointing newcomers at a half-million words of highly variable material and hoping for the best—but even if successful, I suspect that’ll be of limited value. The length of the corpus might contribute to accusations of phygism, but it’s not what worries me about LW. Neither is the norm of relating posts to the Sequences.
This does give me pause, though: LW deals politely with intelligent criticism, but it rarely internalizes it. To the best of my recollection none of the major points of the Sequences have been repudiated, although in a work of that length we should expect some to have turned out to be demonstrably wrong; no one bats a thousand. A few seem to have slipped out of the de-facto canon—the metaethics sequence comes to mind, as does most of Fun Theory—but that doesn’t seem to be so much a response to criticism as to simple lack of interest or lesser development of ideas. It’s not a closed system, not quite, but its visa regulations seem strict, and naturalization difficult.
What can we do about this?
Reply not with “read the sequences”, but with “This is covered in [link to post], which is part of [link to sequence].” ? Use one of the n00b-infested Harry Potter threads, with plenty of wrong but not hopeless reasoning, as target practice.
I think that you’ve got a bigger problem than internalizing repudiations. The demand for repudiations is the mistake Critical Rationalists make—“show me where I’m wrong” is not a sufficiently open mind.
First, the problem might be that you’re not even wrong. You can’t refute something that’s not even wrong. When someone is not even wrong, he has to be willing to justify his ideas, or you can’t make progress. You can lead a horse to water, but you can’t make him think.
(As an aside, is there an article about Not Even Wrong here? I don’t remember one, and it is an important idea to which a lot are probably already familiar. Goes well with the list name, too.)
Second, if one is only open to repudiations, one is not open to fundamentally different conceptualizations on the issue. The mapping from one conceptualization to another can be a tedious and unproductive exercise, if even possible in practical terms.
I’ve spent years on a mailing list about Stirner—likely The mailing list on Stirner. In my opinion, Stirner has the best take on metaethics, and even if you don’t agree, there are a number of issues he brings up better than others. A lot of smart folks on that list, and we made some limited original progress.
Stirner is near the top of the list for things I know better than others. People who would know better, are likely people I already know in a limited fashion. I thought to write an article from that perspective, contrasting that with points in the Metaethics sequence. But I don’t think the argument in the Metaethics sequence really follows, and contemplating an exegesis of it to “repudiate” it fills me with a vast ennui. So, it’s Bah Humbug, and I don’t contribute.
Whatever you might think of me, setting up impediments to people sharing what they know best is probably not in the interest of the list. There’s enough natural impediment to posting an article in a group; always easier to snipe at others than put your own ideas up for target practice. There’s risk in that. And given the prevalence of akrasia here, do we need additional impediments?
One thing that I think would be helpful to all concerned is a weighted rating of the sequence articles, weighted by some function of karma, perhaps. If some sequences have fallen out of canon, or never were in canon, it would be nice to know. Just how much support any particular article has would be useful information.
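For concreteness, here is a minimal sketch of how such a support score might be computed. This is purely illustrative: the data layout, field names, and the weighting function are assumptions for the sake of example, not an existing LW feature.

```python
def article_support(article, comment_weight=0.25):
    """Crude support score: the article's own karma plus a discounted
    contribution from the karma of the discussion under it."""
    comment_karma = sum(c["karma"] for c in article["comments"])
    return article["karma"] + comment_weight * comment_karma

# Hypothetical data for two sequence posts.
sequence = [
    {"title": "Post A", "karma": 60, "comments": [{"karma": 12}, {"karma": 3}]},
    {"title": "Post B", "karma": 15, "comments": [{"karma": -2}]},
]

for post in sorted(sequence, key=article_support, reverse=True):
    print(post["title"], round(article_support(post), 2))
```

Any weighting of this kind would make it easy to see at a glance which sequence posts the community still stands behind and which have quietly fallen out of favor.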
Not that I know of, although it’s referenced all over the place—like Paul Graham’s paper on identity, it seems to be an external part of the LW canon. The Wikipedia page on “Not Even Wrong” does appear in XiXiDu’s list of external resources—a post that’s faded into undeserved obscurity, I think.
As to your broader point, I agree that “show me where I’m wrong” is suboptimal with regard to establishing a genuinely open system of ideas. It’s also a good first step, though, and so I’d view a failure to internalize repudiation as a red flag of the same species as what you seem to be pointing to—a bigger one, in fact. Not sufficient, but necessary.
Certainly if you have been repudiated, but fail to internalize the repudiation, you’ve got a big red flag. But that’s why I think it’s less dangerous and debilitating—it’s clear, obvious, and visible.
I consider only listening to repudiations as the bigger problem: it is being willfully deaf and non responsive to potential improvement. It’s not failing to understand, it’s refusing to listen.
In that case, Lukeprog’s metaethics sequence must have been of great comfort to you, since he didn’t really spend much time on Eliezer’s metaethics sequence. Perhaps you could just start covering Stirner’s material in a discussion post or two and see what happens.
Just curious, was the anarchist Fgrsna Zbylarhk?
Ding! Ding! Ding! We have a winner!
Yeah, that’s the one. I don’t begrudge a guy trying to make a buck, or wanting to push his agenda. I find him a bright guy with a lot of interesting things to say. And I’ll still listen to his youtube videos. But his agenda conflicts with mine, and I don’t want to spend energy discussing issues in a community where one isn’t allowed to publicly argue against some dogma in philosophy. That which can be destroyed by the truth should be.
Oooh, what’s my prize?
Yup, I pretty much agree with your assessment. It was quite the interesting rabbit hole to go down. But at least for me, it became anti-productive and unhealthy. I found much better uses of my time.
Downvoted for unnecessary rot13.
buybuydandavis may not want his username Google’ably related to someone running an organization he considers phygish. Especially if it is a phyg, in which case there’s a non-negligible probability of personal risk by openly using the name. If that’s the case, then it makes sense that he didn’t use the anarchist’s name in his post while he was fine with using Rand’s. Rot13 helps reduce the chance of risk to (what I see as) negligible levels. If this is the case, then I see the use of rot13 as appropriate.
Admittedly, he could have had a slew of other reasons for not using the name. He could just not want to give the guy the traffic. Or maybe he’s not as well known as Rand, and using the name wouldn’t be edifying to most readers. Or maybe it wasn’t a conscious decision at all. If any of these kinds of reasons are the case, then the rot13 was unnecessary.
I had to make a quick judgment under uncertainty, so I decided to use rot13 and err on the side of caution. If I was going to make a mistake, I wanted to ensure I’d make the least costly mistake. From my perspective, the extra few seconds per person reading the comment is worth that assurance.
If there’s a better standard/heuristic by which to use rot13, or if I have broken any of LW’s rules by using it the way I did, please let me know. I’d be happy to correct my behavior. Otherwise, I’ve explained my reasoning and will politely tap out of the conversation. Other commenters can feel free to up/down vote as they feel appropriate.
Edit: Regarding this, buybuydandavis later commented that
I intentionally didn’t use his name because he wasn’t the point, he was an example for a point. And I didn’t wish to publicly criticize him, or introduce a name that many wouldn’t be familiar with.
Perhaps rot13 wasn’t necessary by some written/unwritten rules, but I thought the use was appropriate in response to my deliberately leaving his name out of it; I found it both perceptive and courteous.
That’s a pretty silly concern. There’s 738,000 Google hits for Stefan Molyneux, this thread is never going to crack the top thousand. It won’t crack the top million for “cult”, rot13 or no. And even if it does, so what? Oh no, a discussion board mentions cults in passing—that’s never happened before! If someone is searching specifically for this thread then they’ll find it but that’s probably a good thing—who is going to be looking for a reason other than a followup argument referencing it? I’m all for Google paranoia, but there are limits in all things.
Rot13 is active interference in the conversation—I’m fine with that if there’s reason, like avoiding spoilers, but if there’s no reason, then you’re being silly and wasting my time. “Phyg” is annoying and poserish, and “Fgrsna Zbylarhk” is just obfuscating information that you’re intentionally trying to pass along in an offhand comment (i.e., one that you’re supposed to be able to process quickly).
And yes, I knew when I made that post above that I’d be giving him +rep far outweighing my −1, and probably giving myself just as much -rep. Oh well. If karma actually mattered, I might worry about it.
Interesting side note—I apparently live fairly near Molyneux, and got invited by a mutual acquaintance to an event where he will be speaking after I posted this. Suffice it to say, mockery of anarchism followed.
That’s an important difference, but I don’t think it’s one for the social issues being raised in this post or this thread, which are issues of community interaction—and I think so because it’s the same issues covered in A Group Is Its Own Worst Enemy. This post is precisely the call for a wizard smackdown.
I was going to say essentially this, but the other David did it for me.
I’m sure. What I wonder is how much the sequences even represent a consensus of the original list members involved in the discussion. In my estimation, it varies a lot. In particular, I doubt EY carried the day with even a strong plurality with both his conclusions and argument in the metaethics sequence.
I doubt even Eliezer_2012 would agree with all of them. They were a rather rapidly produced bunch of blog posts and very few people would maintain consistent endorsement of past blogging output.
Hmm. I generally agree with the original post, but I don’t want to be part of an inner circle. I want access to a source of high insight-density information. Whether or not I myself am qualified to post there is an orthogonal issue.
Of course, such a thing would have an extremely high maintenance cost. I have little justification for asking to be given access to it at no personal cost.
Spreading information is important too, but only to the extent that what’s being spread is contributing to the collective knowledge.
Which is yet another purpose that involves tradeoffs with the ones I previously mentioned.
I’m puzzled why you think a private email list involves extremely high maintenance costs. Private google group?
A technological solution to the mass of the problem on this list wouldn’t seem that hard either. As I’ve pointed out in other threads, complex message filtering has been around at least since usenet. Much of the technical infrastructure must already be in place, since we have personally customizable filtering based on karma and Friends. Or add another Karma filter for total Karma for the poster, so that you don’t even have to enter Friends by hand. Combine Poster Karma with Post Karma with an inclusive OR, and you’ve probably gone 80% of the way there to being able to filter unwanted noise.
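A sketch of the kind of filter being described, assuming hypothetical fields for a comment’s own karma and its author’s total karma (the names and thresholds are invented for illustration; this is not LW’s actual code):

```python
def visible(item, min_post_karma=0, min_poster_karma=500):
    """Show an item if EITHER its own karma OR its author's total karma
    clears the threshold (the inclusive OR described above)."""
    return (item["post_karma"] >= min_post_karma
            or item["poster_karma"] >= min_poster_karma)

comments = [
    {"author": "alice", "post_karma": 7,  "poster_karma": 120},   # shown: well-received comment
    {"author": "bob",   "post_karma": -3, "poster_karma": 4000},  # shown: trusted poster
    {"author": "carol", "post_karma": -2, "poster_karma": 15},    # hidden: fails both tests
]

print([c["author"] for c in comments if visible(c)])  # ['alice', 'bob']
```

The thresholds would presumably be per-user settings, just like the existing karma threshold for hiding low-scoring comments.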
Not infrastructural costs. Social costs (and quite a bit of time, I expect). It takes effort to select contributors and moderate content, especially when those contributors might be smarter than you are. Distinguishing between correct contrarianism and craziness is a hard problem.
The difficulty is in working out who to filter. Dealing with overt trolling is easy. I change my opinions often enough over a long enough period of time that a source of ‘information that I agree with’ is nearly useless to me.
I think I get it. You want someone/something else to do the filtering for you?
That’s easy enough too. If others are willing, instead of being Friended, they could be FilterCloned, and you could filter based on their settings. Let EY be the DefaultFilterClone, or let him and his buddies in the Star Chamber set up a DefaultFilterClone.
Not exactly ‘want’. The nature of insights is that they are unexpected. But essentially yes.
[meta] A simple reminder: This discussion has a high potential to cause people to embrace and double down on an identity as part of the inner or outer circles. Let’s try to combat that.
In line with the above, please be liberal with explanations as to why you think an opinion should be downvoted. Going through the thread and mass-downvoting every post you disagree with is not helpful. [/meta]
The post came across to me as an explicit call to such, which is rather stronger than “has a high potential”.
I agree. Low barriers to entry (and utterly generic discussions, like on which movies to watch) seem to have lowered the quality. I often find myself skimming discussions for names I recognize and just reading their comments—ironic, given that once upon a time the anti-kibitzer seemed pressing!
Lest this be seen as unwarranted arrogance: there are many values of p in [0,1] such that I would run a p risk of getting personally banned in return for removing the bottom p of the comments. I often write out a comment and delete it, because I think that, while above the standard of the adjacent comments, it is below what I think the minimal bar should be. Merely saying new, true things about the subject matter is not enough!
The Sequence Re-Runs seem to have had little participation, which is disappointing—I had great hope for those.
As someone who is rereading the sequences I think I have a data point as to why. First of all, the “one post a day” is very difficult for me to do. I don’t have time to digest a LW post every day, especially if I’ve got an exam coming up or something. Secondly, I joined the site after the effort started, so I would have had to catch up anyway. Thirdly, ideally I’d like to read at a faster average rate than one per day. But this hasn’t happened at all, my rate has actually been rather slower, which is kind of depressing.
I’ve actually been running a LW sequence liveblog, mostly for my own benefit during the digestive process. See here. I find myself wondering whether others will join me in the liveblogging business sooner or later. I find it a good way to enforce actually thinking about what I am reading.
What I did personally was read through them relatively quickly. I might not have understood them at the same level of depth, but if something is related to something in the sequences then I’ll know, and I’ll know where I can find the information if there’s anything I’ve forgotten.
I read them, but engaging in discussion seems difficult. Am I just supposed to pretend all of the interesting comments below don’t exist and risk repeating something stupid on the Repeat post? Or should I be trying to get involved in a years-old discussion on the actual article? Sadly, this is something that has a sort of activation energy: if enough people were discussing the sequence repeats, I would discuss them too.
Perhaps we could save users one click by putting the summary of the article on the top of the main page with links “read the article” and “discuss the article” below. Sometimes saving users one click increases the traffic significantly.
Organizing the reading of the sequences into classes of people (think Metaethics Class of 2012) that commit to reading and debating them and then answering a quiz about them seems more likely to get participation.
I still read them and usually remember to vote them up for MinibearRex bothering to post them, and comment if I have something to say.
Edit: Eliminated text to conform to silly new norm. Check out relevant image macro.
It’s whimsical, I like it. The purported SEO rationale behind it is completely laughable (really, folks? People are going to judge the degree of phyggishness of LW by googling LW and phyg together, and you’re going to stand up and fight that? That’s just insane), but it’s cute and harmless, so why not adopt it for a few days? Of all reasons to suspect LW of phyggish behavior, this has got to be the least important one. If using the word “phyg” clinches it for someone, I wouldn’t take them seriously.
To avoid guilt by association?
Beats me. And yet I find myself going along with the new norm, just like you.
One of us… One of us...
Well stop it. We should be able to just call a cult a cult.
Dur ? I think you might have quoted the wrong person in your comment above.
Edit: Retracting my comment now that the parent is fixed
Fixed. Stupid clipboard working differently on windows and linux.
Why in the name of the mighty Cthulhu should people on LW read the sequences? To avoid discussing the same things again and again, so that we can move to the next step. Minus the discussion about definitions of the word phyg, what exactly are we talking about?
When a tree falls down in a LessWrong forest, why there is a “sound”:
Because people on LW are weird. Instead of discussing natural and sane topics, such as cute kittens, iPhone prices, politics, horoscopes, celebrities, sex, et cetera, they talk about crazy stuff like thinking machines and microscopic particles. Someone should do them a favor, turn off their computers, and buy them a few beers, so that normal people can stop being afraid of them.
Because LW is trying to change the way people think, and that is scary. Things like that are OK only when the school system is doing it, because the school system is accepted by the majority. Books are usually also accepted, but only if you borrow them from a public library.
Because people on LW pretend they know some things better than everyone else, and that’s an open challenge that someone should go and kick their butts, preferably literally. Only strong or popular people are allowed to appear better. What’s worse, people on LW have the courage to disagree even with some popular people, and that’s pretty much insane.
When a tree falls down in a LessWrong forest, why there isn’t a “sound”:
There are no known examples of families broken when a family member refuses to submit to eternal knowledge of the Scriptures. (Unless such stories are censored here, of course.)
There are no known examples of violence or blackmail towards a former LW participant who decided to stop reading LW. (Unless such stories are censored here, of course.)
Minus the typical internet procrastination, there are no known examples of people who have lost years of their time and thousands of dollars, ruined their social and professional lives in their blind following of the empty promises LW gave them. (Unless such stories are censored here, of course.)
What next? Any other specific accusations? If no, why in the name of the mighty Cthulhu are we even worrying about the phyg-stuff? Just because someone may find throwing such accusations funny? Are we that prone to trolling?
Let’s talk about a more fruitful topic, such as: “is there a way to make the Sequences more accessible to a newcomer?”
No, that isn’t it. LW isn’t at all special in that respect—a huge number of specialized communities exist on the net which talk about “crazy stuff”, but no one suspects them of being phygs. Your self-deprecating description is a sort of applause lights for LW that’s not really warranted.
No, that isn’t it. Every self-help book (of which there’s a huge industry, and most of which are complete crap) is “trying to change the way people think”, and nobody sees that as weird. The Khan academy is challenging the school system, and nobody thinks they’re phyggish. Attempts to change the way people think are utterly commonplace, both small-scale and large-scale. And the part about books and public libraries is just weird (what?).
Unwarranted applause lights again. Everybody pretends they know some things better than everyone else. Certainly any community does that rallies around experts on some particular topic. With “preferably literally” you cross over into the whining victimhood territory.
The self-pandering here is particularly strong, almost middle-school grade stuff.
You’ve done a very poor job trying to explain why LW is accused of being phyggish.
This, on the other hand, is a great, very strong point that everyone who finds themselves wary of (perceived or actual) phyggishness on LW should remind themselves of. I’m thinking of myself in particular, and thank you for this strong reminder, so forcefully phrased. I have to be doing something wrong, since I frequently ponder about this or that comment on LW that seems to exemplify phyggish thinking to me, but I never counter to myself with something like what I just quoted.
It’s not the Googleability of “phyg”. One recent real-life example is a programmer who emailed me deeply concerned (because I wrote large chunks of the RW article on LW). They were seriously worried about LessWrong’s potential for decompartmentalising really bad ideas, given the strong local support for complete decompartmentalisation, as evidenced by this detailed exploration of how to destroy semiconductor manufacture to head off the uFAI. I had to reassure them that Gwern really is not a crazy person and had no intention of sabotaging Intel worldwide, but was just exploring the consequences of local ideas. (I’m not sure this succeeded in reassuring them.)
But, y’know, if you don’t want people to worry you might go crazy-nerd dangerous, then not writing up plans for ideology-motivated terrorist assaults on the semiconductor industry strikes me as a good start.
Edit: Technically just sabotage, not “terrorism” per se. Not that that would assuage qualms non-negligibly.
On your last point, I have to cite our all-*cough*-wise Professor Quirrell
Nevermind that there were no actual plans for destroying fabs, and that the whole “terrorist plot” seems to be a collective hallucination.
Nevermind that the author in question has exhaustively argued that terrorism is ineffective.
Yeah, but he didn’t do it right there in that essay. And saying “AI is dangerous, stopping Moore’s Law might help, here’s how fragile semiconductor manufacture is, just saying” still read to someone (including several commenters on the post itself) as bloody obviously implying terrorism.
You’re pointing out it doesn’t technically say that, but multiple people coming to that essay have taken it that way. You can say “ha! They’re wrong”, but I nevertheless submit that if PR is a consideration, the essay strikes me as unlikely to be outweighed by using rot13 for SEO.
Yes, I accept that it’s a problem that everyone and their mother leapt to the false conclusion that he was advocating terrorism. I’m not saying anything like “Ha! They’re wrong!” I’m lamenting the lamentable state of affairs that led so many people to jump to a false conclusion.
“Just saying” is really not a disclaimer at all. c.f. publishing lists of abortion doctors and saying you didn’t intend lunatics to kill them—if you say “we were just saying”, the courts say “no you really weren’t.”
We don’t have a demonstrated lunatic hazard on LW (though we have had unstable people severely traumatised by discussions and their implications, e.g. Roko’s Forbidden Thread), but “just saying” in this manner still brings past dangerous behaviour along these lines to mind; and, given that decompartmentalising toxic waste is a known nerd hazard, this may not even be an unreasonable worry.
As far as I can tell, “just saying” is a phrase you introduced to this conversation, and not one that appears anywhere in the original post or its comments. I don’t recall saying anything about disclaimers, either.
So what are you really trying to say here?
It’s a name for the style of argument: that it’s not advocating people do these things, it’s just saying that uFAI is a problem, slowing Moore’s Law might help and by the way here’s the vulnerabilities of Intel’s setup. Reasonable people assume that 2 and 2 can in fact be added to make 4, even if 4 is not mentioned in the original. This is a really simple and obvious point.
Note that I am not intending to claim that the implication was Gwern’s original intention (as I note way up there, I don’t think it is); I’m saying it’s a property of the text as rendered. And that me saying it’s a property of the text is supported by multiple people adding 2 and 2 for this result, even if arguably they’re adding 2 and 2 and getting 666.
It’s completely orthogonal to the point that I’m making.
If somebody reads something and comes to a strange conclusion, there’s got to be some sort of five-second level trigger that stops them and says, “Wait, is this really what they’re saying?” The responses to the essay made it evident that there’s a lot of people that failed to have that reaction in that case.
That point is completely independent from any aesthetic/ethical judgments regarding the essay itself. If you want to debate that, I suggest talking to the author, and not me.
I’d have wondered about it myself if I hadn’t had prior evidence that Gwern wasn’t a crazy person, so I’m not convinced that it’s as obviously surface-innocuous as you feel it is. Perhaps I’ve been biased by hearing crazy-nerd stories (and actually going looking for them, ’cos I find them interesting). And I do think the PR disaster potential was something I would class as obvious, even if terrorist threats from web forum postings are statistically bogeyman stories.
I suspect we’ve reached the talking past each other stage.
I understood “just saying” as a reference to the argument you imply here. That is, you are treating the object-level rejection of terrorism as definitive and rejecting the audience’s inference of endorsement of terrorism as a simple error, and DG is observing that treating the object-level rejection as definitive isn’t something you can take for granted.
Meaning does not excuse impact, and on some level you appear to still be making excuses. If you’re going to reason about impressions (I’m not saying that you should, it’s very easy to go too far in worrying about sounding respectable), you should probably fully compartmentalize (ha!) whether a conclusion a normal person might reach is false.
I’m not making excuses.
Talking about one aspect of a problem does not imply that other aspects of the problem are not important. But honestly, that debate is stale and appears to have had little impact on the author. So what’s the point in rehashing all of that?
I agree that it’s not fair to blame LW posters for the problem. However, I can’t think of any route to patching the problem that doesn’t involve either blaming LW posters, or doing nontrivial mind alterations on a majority of the general population.
Anyway, we shouldn’t make it too easy for people to reach the false conclusion, and we should err on the side of caution.
Having said this, I join your lamentations.
Nevermind the fact that LW actually believes that uFAI has infinitely negative utility and that FAI has infinitely positive utility (see arguments for why SIAI is the optimal charity). That people conclude that acts that most people would consider immoral are justified by this reasoning, well I don’t know where they got that from. Certainly not these pages.
Ordinarily, I would count on people’s unwillingness to act on any belief they hold that is too far outside the social norm. But that kind of thinking is irrational, and irrational restraint has a bad rep here (“shut up and calculate!”)
LW scares me. It’s straightforward to take the reasoning of LW and conclude that terrorism and murder are justified.
Is there any ideology or sect of which that could not be said? Let us recall the bloody Taoist and Buddhist rebellions or wars in East Asian history and endorsements of wars of conquest, if we shy away from Western examples.
Oh sure, there are plenty of other religions as dangerous as the SIAI. It’s just strange to see one growing here among highly intelligent people who spend a ton of time discussing the flaws in human reasoning that lead to exactly this kind of behavior.
However, there are ideologies that don’t contain shards of infinite utility, or that contain a precautionary principle that guards against shards of infinite utility that crop up. They’ll say things like “don’t trust your reasoning if it leads you to do awful things” (again, compare that to “shut up and calculate”). For example, political conservatism is based on a strong precautionary principle. It was developed in response to the horrors wrought by the French Revolution.
One of the big black marks on the SIAI/LW is the seldom discussed justification for murder and terrorism that is a straightforward result of extrapolating the locally accepted morality.
I don’t know how you could read LW and not realize that we certainly do accept precautionary principles (“running on corrupted hardware” has its own wiki entry), that we are deeply skeptical of very large quantities or infinities (witness not one but two posts on the perennial problem of Pascal’s mugging in the last week, neither of which says ‘you should just bite the bullet’!), and libertarianism is heavily overrepresented compared to the general population.
No, one of the ‘big black marks’ on any form of consequentialism or utilitarianism (as has been pointed out ad nauseam over the centuries) is that. There’s nothing particular to SIAI/LW there.
It’s true that lots of Utilitarianisms have corner cases where they support actions that would normally be considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.
The problem with LW/SIAI is that the moral monstrosities they support are much more actionable. Today, there are dozens of companies working on AI research. LW/SIAI believes that their work will be of infinite negative utility if they are successful before Eliezer invents FAI theory and he convinces them that he’s not a crackpot. The fate of not just human civilization, but all of galactic civilization is at stake.
So, if any of them looks likely to be successful, such as scheduling a press conference to announce a breakthrough, then it’s straightforward to see what SI/LW thinks you should do about that. Actually, given the utilities involved, a more proactive strategy may be justified, if you know what I mean.
I’m pretty sure this is going to evolve into an evil terrorist organization, and would have done so already if the population weren’t so nerdy and pacifistic to begin with.
And yes, there are the occasional bits of cautionary principles on LW. But they are contradicted and overwhelmed by “shut up and calculate”, which says trust your arithmetic utilitarian calculus and not your ugh fields.
I agree that it follows from (L1) the assumption of (effectively) infinite disutility from UFAI, that (L2) if we can prevent a not-guaranteed-to-be-friendly AGI from being built, we ought to. I agree that it follows from L2 that if (L3) our evolving into an evil terrorist organization minimizes the likelihood that not-guaranteed-to-be-friendly AGI is built, then (L4) we should evolve into an evil terrorist organization.
The question is whether we believe L3, and whether we ought to believe L3. Many of us don’t seem to believe this.
Do you believe it?
If so, why?
I don’t expect terrorism is an effective way to get utilitarian goals accomplished. Terrorist groups not only don’t tend to accomplish their goals; but also, in those cases where a terrorist group’s stated goal is achieved or becomes obsolete, they don’t dissolve and say “our work is done” — they change goals to stay in the terrorism business, because being part of a terrorist group is a strong social bond. IOW, terrorist groups exist not in order to effectively accomplish goals, but rather to accomplish their members’ psychological needs.
“although terrorist groups are more likely to succeed in coercing target countries into making territorial concessions than ideological concessions, groups that primarily attack civilian targets do not achieve their policy objectives, regardless of their nature.” — Max Abrahms, “Why Terrorism Does Not Work”
“The actual record of terrorist behavior does not conform to the strategic model’s premise that terrorists are rational actors primarily motivated to achieving political ends. The preponderance of empirical and theoretical evidence is that terrorists are rational people who use terrorism primarily to develop strong affective ties with fellow terrorists.” — Max Abrahms, “What Terrorists Really Want: Terrorist Motives and Counterterrorism Strategy”.
Moreover, terrorism is likely to be distinctly ineffective at preventing AI advances or uFAI launch, because these are easily done in secret. Anti-uFAI terrorism should be expected to be strictly less successful than, say, anti-animal-research or other anti-science terrorism: it won’t do anything but impose security costs on scientists, which in the case of AI can be accomplished much easier than in the case of biology or medicine because AI research can be done anywhere. (Oh, and create a PR problem for nonterrorists with similar policy goals.)
As such, L3 is false: terrorism predictably wouldn’t work.
Yeah. When I run into people like Jacob (or XiXi), all I can do is sigh and give up. Terrorism seems like a great idea… if you are an idiot who can’t spend a few hours reading up on the topic, or even just read the freaking essays I have spent scores of hours researching & writing on this very question discussing the empirical evidence.
Apparently they are just convinced that utilitarians must be stupid or ignorant. Well! I guess that settles everything.
There’s a pattern that shows up in some ethics discussions where it is argued that an action that you could actually go out and start doing (so no 3^^^3 dust specks or pushing fat people in front of runaway trains) that diverges from everyday social conventions is a good idea. I get the sense from some people that they feel obliged to either dismiss the idea by any means, or start doing the inconvenient but convincingly argued thing right away. And they seem to consider dismissing the idea with bad argumentation a lesser sin than conceding a point or suspending judgment and then continuing to not practice whatever the argument suggested. This shows up often in discussions of vegetarianism.
I got the idea that XiXiDu was going crazy because he didn’t see any options beyond dedicating his life to door-to-door singularity advocacy or finding the fatal flaw which proved once and for all that SI are a bunch of deluded charlatans, and he didn’t want to do the former just because a philosophical argument told him to and couldn’t quite manage the latter.
If this is an actual thing, people with this behavior pattern would probably freak out if presented with an argument for terrorism they weren’t able to dismiss as obviously flawed extremely quickly.
XiXi was around for a while before he began ‘freaking out’.
I think what Risto meant was “an argument for terrorism they weren’t able to (dismiss as obviously flawed extremely quickly)”, not “people with this behavior pattern would probably freak out (..) extremely quickly”.
How long it takes for the hypothetical behavior pattern to manifest is, I think, beside their point.
(nods) I do have some sympathy for how easy it is to go from “I endorse X based on Y, and you don’t believe Y” to “You reject X.” But yeah, when someone simply refuses to believe that I also endorse X despite rejecting Y, there’s not much else to say.
Yup, I agree with all of this.
I’m curious about jacoblyles’ beliefs on the matter, though.
More specifically, I’m trying to figure out whether they believe L3 is true, or believe that LW/SI believes L3 is true whether it is or not, or something else.
‘Pretty sure’, eh? Would you care to take a bet on this?
I’d be happy to go with a few sorts of bets, ranging from “an organization that used to be SIAI or CFAR is put on the ‘Individuals and Entities Designated by the State Department Under E.O. 13224’ or ‘US Department of State Terrorist Designation Lists’ within 30 years” to “>=2 people previously employed by SIAI or CFAR will be charged with conspiracy, premeditated murder, or attempted murder within 30 years” etc. I’d be happy to risk, on my part, amounts up to ~$1000, depending on what odds you give.
If you’re worried about counterparty risk, we can probably do this on LongBets (although since they require the money upfront I’d have to reduce my bet substantially).
Thanks for comments. What I wrote was exaggerated, written under strong emotions, when I realized that the whole phyg discussion does not make sense, because there is no real harm, only some people made nervous by some pattern matching. So I tried to list the patterns which match… and then those which don’t.
My assumption is that there are three factors which together make the bad impression; separately they are less harmful. Being only “weird” is pretty normal. Being “weird + thorough”, for example memorizing all Star Trek episodes, is more disturbing, but it only seems to harm the given individual. The majority will make fun of such individuals, they are seen as at the bottom of the pecking order, and they kind of accept it.
The third factor is when someone refuses to accept the position at the bottom. It is the difference between saying “yeah, we read sci-fi about parallel universes, and we know it’s not real, ha-ha silly us” and saying “actually, our interpretation of quantum physics is right, and you are wrong, that’s the fact, no excuses”. This is the part that makes people angry. You are allowed to take the position of authority only if you are a socially accepted authority. (A university professor is allowed to speak about quantum physics in this manner, a CEO is allowed to speak about money this way, a football champion is allowed to speak about football this way, etc.) This is breaking a social rule, and it has consequences.
A self-help book is safe. A self-help organization, not so much. (I mean an organization of people trying to change themselves, such as Alcoholics Anonymous, not a self-help publishing/selling company.)
They are supplementing the school system, not criticizing it. The schools can safely ignore them. Khan Academy is admired by some people, but generally it remains at the bottom of the pecking order. This would change for example if they started openly criticizing the school system, and telling people to take their children away from schools.
Generally I think that when people talk about phygs, the reason is that their instinct is saying: “inside of your group, a strong subgroup is forming”. A survival reaction is to call the attention of the remaining group members to destroy this subgroup together before it becomes strong enough. You can avoid this reaction if the subgroup signals weakness, or if it signals loyalty to the current group leadership; in both cases, the subgroup does not threaten the existing order.
Assuming this instinct is real, we can’t change it; we can just avoid triggering the reaction. How exactly? One way is to signal harmlessness; but this seems incompatible with our commitment to truth and the spirit of tsuyoku naritai. Another way is to fall below the radar by using obscure technical speech; but this seems incompatible with our goal of raising the sanity waterline (we must be comprehensible to the public). Yet another way is to signal loyalty to the regime, such as the Singularity Institute publishing in peer-reviewed journals. Even this is difficult, because irrationality is very popular, so by attacking irrationality we inevitably attack many popular things. We should choose our battles wisely. But this is the way I would prefer. Perhaps there is yet another way that I forgot.
If the phyg-meme gets really bad we can just rename the site “lessharmful.com”.
Seriously?
Which part of my comment are you incredulous about?
That nobody sees self-help books as weird or cultlike.
redacted
That is one of the central fallacies of LW. The Sequences generally don’t settle issues in a step-by-step way. They are made up of postings, each of which is followed by a discussion often containing a lot of “I don’t see what you mean” and “I think that is wrong because”. The stepwise model may be attractive, but that doesn’t make it feasible. Science isn’t that linear, and most of the topics dealt with are philosophy... ’nuff said.
I think your post is troubling in a couple of ways.
First, I think you draw too much of a dichotomy between “read sequences” and “not read sequences”. I have no idea what the true percentage of active LW members is, but I suspect a number of people, particularly new members, are in the process of reading the sequences, like I am. And that’s a pretty large task—especially if you’re in school, trying to work a demanding job, etc. I don’t wish to speak for you, since you’re not clear on the matter, but are people in the process of reading the sequences noise? I’m only in QM, and certainly wasn’t there when I started posting, but I’ve gotten over 1000 karma (all of it on comments or discussion level posts). I’d like to think I’ve added something to the community.
Secondly, I feel like entrance barriers are pretty damn high already. I touched on this in my other comment, but I didn’t want to make all of these points in that thread, since they were off topic to the original. When I was a lurker, the biggest barrier to me saying hi was a tremendous fear of being downvoted. (A re-reading of this thread seems prudent in light of this discussion) I’d never been part of a forum with a karma system before, and I’d spent enough time on here to know that I really respected the opinions of most people on here. The idea of my ideas being rejected by a community that I’d come to respect was very stressful. I eventually got over it, and as I got more and more karma, it didn’t hurt so much when I lost a point. But being karmassassinated was enough to throw all of that into doubt again, since when I asked about it I was just downvoted and no one commented. (I’m sure it’s just no one happened to see it in recent comments, since it was deep in a thread.) I thought that it was very likely that I would leave the site after that, because it seemed to me that people simply didn’t care what I had to say—my comments for about two days were met with wild downvoting and almost no replies, except by one person. But I don’t think I am the only person that felt this way when ey joined LessWrong.
Edit: Hyperlink messed up.
Edit 2: It just now occurred to me to add this, and I’ve commented enough in this thread for one person, so I’m putting it here: I think all of the meetup posts are much more harmful to the signal to noise ratio than anything else. Unless you’re going to them, there’s no reason to be interested in them.
Get a few more (thousand?) karma and you may find getting karmassassinated doesn’t hurt much any more either. I get karmassassinated about once a fortnight (frequency memory subject to all sorts of salience biases and utterly unreliable—it happens quite a lot though) and it doesn’t bother me all that much.
These days I find that getting the last 50 comments downvoted is a lot less emotionally burdensome than getting just one comment that I actually personally value downvoted in the absence of any other comments. The former just means someone (or several someones) don’t like me. Who cares? Chances are they are not people I respect, given that I am a lot less likely to offend people when I respect them. On the other hand if most of my comments have been upvoted but one specific comment that I consider valuable gets multiple downvotes it indicates something of a judgement from the community and is really damn annoying. On the plus side it can be enough to make me lose interest in lesswrong for a few weeks and so gives me a massive productivity boost!
I believe you. That fear is a nuisance (to us if it keeps people silent and to those who are limited by it). If only we could give all lurkers rejection therapy to make them immune to this sort of thing!
I think if I were karmassassinated again I wouldn’t care nearly as much, because of how stupid I felt after the first time it happened. It was just so obvious that it was just some idiot, but I somehow convinced myself it wasn’t.
But that being said, one of the reasons it bothered me so much was that there were a number of posts that I was proud of that were downvoted—the guy who did it had sockpuppets, and it was more like my last 15-20 posts had each lost 5-10 karma. (This was also one of the reasons I wasn’t so sure it was karmassassination) Which put a number of posts I liked way below the visibility threshold. And it bothered me that if I linked to those comments later, people would just see a really low karma score and probably ignore it.
I think you can’t give more downvotes than your karma, so that person would need 5-10 sockpuppets with at least 15-20 (EDIT: actually 4-5) karma each. If someone is going to the trouble of doing that, it seems unlikely that they would just pick on you and nobody else (given that your writings don’t seem to be particularly extreme in some way). Has anyone else experienced something similar?
Creating sockpuppets for downvoting is easy.
(kids, don’t try this at home).
Just find a Wikipedia article on a cognitive bias that we haven’t had a top-level post on yet. Then, make a post to main with the content of the Wikipedia article (restated) and references to the relevant literature (you probably can safely make up half of the references). It will probably get in the neighborhood of 50 upvotes, giving you 500 karma, which allows 2000 comment downvotes.
Even if those estimates are really high, that’s still a lot of power for little effort. And just repeat the process for 20 biases, and you’ve got 20 sockpuppets who can push a combined 20 downvotes on a large number of comments.
Of course, in the bargain Less Wrong is getting genuinely high-quality articles. Not necessarily a bug.
If restating Wikipedia is enough to make for a genuinely high-quality article, maybe we should have a bot that copy-pastes a relevant Wikipedia article into a top-level post every few days. (Based on a few minutes of research, it looks like this is legal if you link to the original article each time, but tell me if I’m wrong.)
Really, I think the main problem with this is that most of the work is identifying which ones are the ‘relevant’ articles; the fetching itself is mechanical (a rough sketch follows below).
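For concreteness, here is a minimal sketch of the mechanical fetch-and-post step, assuming Wikipedia’s public REST summary endpoint and the requests library; the response fields used and the bot name are assumptions, and the real work (choosing a relevant article and restating rather than copying it) is deliberately left out.

```python
# Minimal sketch of the mechanical half of the "post a bias article" bot idea.
# Assumes the public Wikipedia REST summary endpoint and the requests library;
# choosing a *relevant* article, and restating it, is left as the real work.
import requests

def fetch_summary(title: str) -> dict:
    """Fetch the plain-text summary of a Wikipedia article by title."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, headers={"User-Agent": "lw-draft-bot/0.1"})  # hypothetical bot name
    resp.raise_for_status()
    return resp.json()

def draft_post(title: str) -> str:
    """Turn a summary into a draft post that links back to the original."""
    data = fetch_summary(title)
    return (
        f"# {data['title']}\n\n"
        f"{data['extract']}\n\n"
        f"(Adapted from Wikipedia: {data['content_urls']['desktop']['page']})"
    )

if __name__ == "__main__":
    # Picking the relevant bias is the part no script solves for you.
    print(draft_post("Illusory_superiority"))
```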
I was implying a non-copy-paste solution. Still, interesting idea.
Yes; I didn’t mean to say you were implying a copy-paste solution. But if we’re speaking in the context of causing good articles to be posted and not in the context of thinking up hypothetical sock-puppeting strategies, whether it’s copy-pasted or restated shouldn’t matter unless the restatement is better-written than the original.
agreed
Modulo the fake references, of course.
of course
There’s not much reason to do something like this, when you can arbitrarily upvote your own comments with your sockpuppets and give yourself karma.
But then those comments / posts will be correctively downvoted, unless they’re high-quality. And you get a bunch more karma from a few posts than a few comments, so do both!
You can delete them afterwards, you keep karma from deleted posts.
Let’s keep giving the disgruntled script kiddies instructions! That’s bound to produce eudaimonia for all!
We found one of the sockpuppets, and he had one comment that added nothing that was at like 13 karma. It wasn’t downvoted until I was karmassassinated.
It’s some multiple of your karma, isn’t it? At least four, I think- thomblake would know.
Yes, 4x, last I checked.
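For anyone puzzling over the arithmetic upthread (500 karma allowing roughly 2000 downvotes), the rule being recalled is just a multiplier. A trivial sketch, assuming the 4x figure remembered in this thread is accurate rather than a documented site constant:

```python
# Toy illustration of the downvote cap being recalled in this thread:
# budget = karma * 4. The multiplier is the thread's recollection
# ("last I checked"), not a documented constant.
def downvote_budget(karma: int, multiplier: int = 4) -> int:
    return karma * multiplier

print(downvote_budget(500))  # -> 2000, matching the estimate upthread
print(downvote_budget(15))   # -> 60, i.e. even a low-karma sockpuppet goes a long way
```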
I should note that I have never actually been in your shoes. I haven’t had any cases where there was unambiguous use of bulk sockpuppets. I’ve only been downvoted via breadth (up to 50 different comments from my recent history) and usually by only one person at a time (occasionally two or three but probably not two or three that go as far as 50 comments at the same time).
That would really mess with your mind if you were in a situation where you could not yet reliably model community preferences (and be personally confident in your model despite immediate evidence.)
Take it as a high compliment! Nobody has ever cared enough about me to make half a dozen new accounts. What did you do to deserve that?
It was this thread.
Basically it boiled down to this: I was suggesting that one reason some people might donate to more than one charity is that they’re risk averse and want to make sure they’re doing some good, instead of trying to help and unluckily choosing an unpredictably bad charity. It was admittedly a pretty pedantic point, but someone apparently didn’t like it.
That seems to be something I would agree with, with an explicit acknowledgement that it relies on a combination of risk aversion and non-consequentialist values.
It didn’t really help that I made my point very poorly.
Presumably also because people you respect are not very likely to express their annoyance through something as silly as karmassassination, right?
It’s great that you are reading the sequences. You are right that it’s not as simple as read them → not noise, not read them → noise. Since you say you are up to QM, I would expect you not to make the sort of mistakes that come from not having read the core sequences. On the other hand, if you posted something about ethics or AI (I forget where the AI stuff is chronologically), I would expect you to make some common mistakes and be basically noise.
The high barrier to entry is a problem for new people joining, but I also want a more strictly informed crowd to talk to sometimes. I think the best arrangement would be a lower barrier to entry overall, plus at least one place where having read the material is strictly expected, but there are problems with that too.
Don’t leave, keep reading. When you are done you will know what I’m getting at.
I think it’s close to the end, right before/after the fun theory sequence? I’ve read some of the later posts just from being linked to them, but I’m not sure.
And I quite intentionally avoid talking about things like AI, because I know you’re right. I’m not sure that necessarily holds for ethics, since ethics is a much more approachable problem from a layperson’s standpoint. I spent a three hour car drive for fun trying to answer the question “How would I go about making an AI” even though I know almost nothing about it. The best I could come up with was having some kind of program that created a sandbox and randomly generated pieces of code that would compile, and pitting them in some kind of bracket contest that would determine intelligence and/or friendliness. Thought I’d make a discussion post about it, but I figured it was too obvious to not have been thought of before.
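Purely to illustrate the shape of that bracket idea (and emphatically not as a serious AI proposal), here is a toy sketch: the “randomly generated programs” are stand-ins (random quadratic functions), the “contest” is a single-elimination bracket scored against a fixed task, and every name and number is invented for the example.

```python
# Toy sketch of the "bracket contest" idea from the comment above: candidates
# are random quadratic functions (stand-ins for "randomly generated programs"),
# and the contest scores them against a fixed target task. Hypothetical
# illustration only, not a serious AI design.
import random

def target(x):
    """The fixed task the candidates are judged against."""
    return 3 * x * x + 2 * x + 1

TEST_POINTS = [x / 10 for x in range(-20, 21)]

def random_candidate():
    """A 'program' here is just three random coefficients (a, b, c) for a*x^2 + b*x + c."""
    return tuple(random.uniform(-5, 5) for _ in range(3))

def score(cand):
    """Lower is better: squared error against the target on the test points."""
    a, b, c = cand
    return sum((a * x * x + b * x + c - target(x)) ** 2 for x in TEST_POINTS)

def bracket(candidates):
    """Single-elimination bracket: pair candidates off, the better score advances."""
    while len(candidates) > 1:
        next_round = []
        for i in range(0, len(candidates), 2):
            pair = candidates[i:i + 2]
            next_round.append(min(pair, key=score))
        candidates = next_round
    return candidates[0]

if __name__ == "__main__":
    field = [random_candidate() for _ in range(16)]
    winner = bracket(field)
    print("winner:", winner, "score:", round(score(winner), 2))
```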
Aside: That sockpuppetry seems to now be an accepted mode of social discourse on LessWrong strikes me as a far greater social problem than people not having read the Sequences. (“Not as bad as” is a fallacy, but that doesn’t mean both things aren’t bad.)
edit: and now I’m going to ask why this rated a downvote. What does the downvoter want less of?
edit 2: fair enough, “accepted” is wrong. I meant that it’s a thing that observably happens. I also specifically mean socking-up to mass-downvote someone, or to be a dick to people, not roleplay accounts like Clippy (though others find those problematic).
I think it was downvoted because sockpuppetry wasn’t really “accepted” by LW, it was just one guy.
Yeah, “accepted” is connotationally wrong—I mean it’s observed, and it’s hard to do much about it.
To what extent does anyone except EY have moderation control over LW?
There are several people capable of modifying or deleting posts and comments.
Ahem, on my side it was a case of bad pattern-matching. When I realized it, I deleted the reply I was writing here, and also removed the downvote.
Perhaps you should have explained further why you think sockpuppetry is bad. My original guess was that you were speaking about people having multiple votes from multiple accounts (I was primed by other comments in this thread), and I habitually downvote most comments speaking about karma. But now it seems to me that you are concerned with other aspects, such as anonymity and role-playing. But this is only a guess; I can’t tell from your comment.
Yeah, bad explanation on my account. I’m not so concerned with roleplay accounts (e.g. Clippy), as with socking up to mass-downvote. (Getting initial karma is very easy.) Socking-up to be a dick to people also strikes me as problematic. I think I mean “observed” rather than “accepted”, which implies a social norm.
My $0.02 (apologies if it’s already been said; I haven’t read all the comments): wanting to do Internet-based outreach and get new people participating is somewhat at odds with wanting to create a specialized advanced-topics forum where we’re not constantly rehashing introductory topics. They’re both fine goals, but trying to do both at once doesn’t work well.
LW as it is currently set up seems better optimized for outreach than for being an advanced-topics forum. At the same time, LW doesn’t want to devolve to the least common denominator of the Internet. This creates tension. I’m about .6 confident that tension is intentional.
Of course, nothing stops any of us from creating invitation-only fora to which only the folks whose contributions we enjoy are invited. To be honest, I’ve always assumed that there exist a variety of private LW-spinoff forums where the folks with more specialized/advanced groundings get to interact without being bothered by the rest of us.
Somewhat relatedly, one feature I miss from the bad old usenet days is kill files. I suspect that I would value LW more if I had the ability to conceal-by-default comments by certain users here. Concealing sufficiently downvoted comments is similar in principle, but not reliable in practice.
My LessWrong Power Reader has a feature that allows you to mark authors as liked/disliked, which helps to determine which comments are expanded vs collapsed. Right now the weights are set so that if you’ve disliked an author, then any comment written by him or her that has 0 points or less, along with any descendants of that comment, will be collapsed by default. Each comment in the collapsed thread still has a visible header with author and points and color-coding to help you determine whether you still want to check it out.
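A minimal sketch of the collapse rule as described above (the field names and tree structure here are invented for illustration; the actual Power Reader is a browser script with its own internals):

```python
# Sketch of the described rule: a comment by a disliked author at 0 points or
# below is collapsed, and everything beneath it collapses too. The Comment
# structure and field names are made up for this example.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Comment:
    author: str
    points: int
    children: List["Comment"] = field(default_factory=list)

def collapse_flags(comment: Comment, disliked: Set[str], forced: bool = False) -> list:
    """Walk the comment tree and return (author, points, collapsed?) tuples."""
    collapsed = forced or (comment.author in disliked and comment.points <= 0)
    flags = [(comment.author, comment.points, collapsed)]
    for child in comment.children:
        flags.extend(collapse_flags(child, disliked, forced=collapsed))
    return flags

if __name__ == "__main__":
    thread = Comment("alice", 5, [Comment("bob", 0, [Comment("carol", 3)])])
    print(collapse_flags(thread, disliked={"bob"}))
    # -> [('alice', 5, False), ('bob', 0, True), ('carol', 3, True)]
```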
(blink)
You are my new favorite person.
I am, admittedly, fickle.
And for discussion and top-level posts, there is already the friends feature:
http://lesswrong.com/prefs/friends/
(You can also add someone as a friend from their user page.)
There is something that appeals to me about this “roll your own exclusive forum” approach.
I am ashamed to say that I had no idea about the Friends feature. Thanks!
You’re suggesting a strategy of tension?
Aw. And they didn’t invite nyan_sandwich. That’s so sad.
He or she should get together with other people who haven’t been invited to Even Less Wrong and form their own. Then one day they can get together with Even Less Wrong like some NFL/AFL merger, only with more power to save the world.
There would have to be a semaphore or something, somewhere. So these secret groups can let each other know they exist without tipping off the newbs.
There’s probably no need for the groups to signal each other’s existence.
When a new Secret Even Less Wrong is formed, members of previously formed Secret Even Less Wrongs who are still participating in Less Wrong are likely to receive secret invites to the new Secret Even Less Wrong.
Nyan_sandwich might set up his secret Google Group or whatever, invite the people he feels are worthy and willing to form the core of his own Secret Even Less Wrong, and receive in reply an invite to an existing Secret Even Less Wrong.
That might have already happened!
Nothing nearly that Machiavellian, more of a strategy of homeostasis through dynamic equilibrium.
I have tried, and failed, to launch elitist spinoff subcommunities like that multiple times.
To what do you attribute the failures?
Lack of interest, lack of exposure, lack of momentum.
LW’s period of fastest growth was due to Eliezer’s posts that were accessible and advanced (and entertaining, etc.) Encouraging other people to do work like that could be more promising than splitting the goals as you propose.
Let’s be explicit here—your suggestion is that people like me should not be here. I’m a lawyer, and my mathematics education ended at Intro to Statistics and Advanced Theoretical Calculus. I’m interested in the cognitive bias and empiricism stuff (raising the sanity line), not AI. I’ve read most of the core posts of LW, but haven’t gone through most of the sequences in any rigorous way (i.e. read them in order).
I agree that there seem to be a number of low-quality posts in Discussion recently (in particular, Rationally Irrational should not be in Main). But people willing to ignore the local social norms will ignore them however we choose to enforce them. By contrast, I’ve had several ideas for posts (in Discussion) that I don’t post, because I don’t think they meet the community’s expected quality standard.
Raising the standard for membership in the community will exclude me or people like me. That will improve the quality of technical discussion, at the cost of the “raising the sanity line” mission. That’s not what I want.
No martyrs allowed.
I don’t propose that people who haven’t read everything should simply be disallowed from being taken seriously, as long as they don’t say anything stupid. It’s fine if you haven’t read the sequences and don’t care about AI or heavy philosophy stuff; I just don’t want to read dumb posts about those topics from someone who hasn’t read the material.
As a matter of fact, I was careful to not propose much of anything. Don’t confuse “here’s a problem that I would like solved” with “I endorse this stupid solution that you don’t like”.
Fair enough. But I think you threw a wide net over the problem. To the extent you are unhappy that noobs are “spouting garbage that’s been discussed to death” and aren’t being sufficiently punished for it, you could say that instead. If that’s not what you are concerned about, then I have failed to comprehend your message.
Exclusivity might solve the problem of noobs rehashing old topics from the beginning (and I certainly agree that needing to tell everyone that beliefs must make predictions about the future gets old very fast). But it would have multiple knock-on effects that you have not even acknowledged. My intuition is that evaporative cooling would be bad for this community, but your sense may differ.
I, for one, would like to see discussion of LW topics from the perspective of someone knowledgeable about the history of law; after all law is humanity’s main attempt to formalize morality, so I would expect some overlap with FAI.
I don’t mind people who haven’t read the sequences, as long as they don’t start spouting garbage that’s already been discussed to death and act all huffy when we tell them so; common failure modes are “Here’s an obvious solution to the whole FAI problem!”, “Morality all boils down to X”, and “You people are a cult, you need to listen to a brave outsider who’s willing to go against the herd like me”.
If you’re interested in concrete feedback, I found your engagement in discussions with hopeless cases a negative contribution, which is a consideration unrelated to the quality of your own contributions (including in those discussions). Basically, a violation of “Don’t feed the clueless (just downvote them)” (this post suggests widening the sense of “clueless”), which is one policy that could help with improving the signal/noise ratio. Perhaps this policy should be publicized more.
I support not feeding the clueless, but I would like to emphasize that that policy should not bleed into a lack of explaining downvotes of otherwise clueful people. There aren’t many things more aggravating than participating in a discussion where most of my comments get upvoted, but one gets downvoted and I never find out what the problem was—or seeing some comment I upvoted be at −2, and not knowing what I’m missing. So I’d like to ask everyone: if you downvote one comment for being wrong, but think the poster isn’t hopeless, please explain your downvote. It’s the only way to make the person stop being wrong.
Case in point: this discussion currently includes 30 comments, an argument with a certain Clueless, most of whose contributions are downvoted-to-hidden. That discussion shouldn’t have taken place, its existence is a Bad Thing. I just went through it and downvoted most of those who participated, except for the Clueless, who was already downvoted Sufficiently.
I expect a tradition of discouraging both sides of such discussions would significantly reduce their impact.
While I usually share a similar sentiment, upon consideration I disagree with your prediction when it comes to the example conversation in question.
People explaining things to the Clueless is useful, both to the person doing the explaining and to anyone curious enough to read along. This is conditional on the people in the interaction having the patience to decipher the nature of the inferential distance and to break the ideas down into effective explanations of the concepts—including links to relevant resources. (This precludes cases where the conversation degenerates into bickering and excessive expressions of frustration.)
Trying to explain what is usually simply assumed—to a listener who is at least willing to communicate in good faith—can be a valuable experience to the one doing the explaining. It can encourage the re-examination of cached thoughts and force the tracing of the ideas back to the reasoning from first principles that caused you to believe them in the first place.
There are many conversations where downvoting both sides of a discussion is advisable, yet it isn’t conversations with the “Clueless” that are the problem. It is conversations with Trolls, Dickheads and Debaters of Perfect Emptiness that need to go.
Startlingly, Googling “Debaters of Perfect Emptiness” turned up no hits. This is not the best of all possible worlds.
Think “Lawyer”, “Politician” or the bottom line.
Sorry, I wasn’t clear. I understood perfectly well what you meant by the phrase and was delighted by it. What I meant to convey was that I was saddened to discover that I lived in a universe where it was not a phrase in common usage, which it most certainly ought to be.
Oh, gotcha. I’m kind of surprised we don’t have a post on it yet. Lax of me!
I accept your criticism in the spirit it was intended—but I’m not sure you are stating a local consensus instead of your personal preference. Consider the recent exchange I was involved in. It doesn’t appear to me that the more wrong party has been downvoted to oblivion, and he should have been by your rule. (Specifically, the Main post has been downvoted, but not the comment discussion)
Philosophically, I think it is unfortunate that the people who believe that almost all terminal values are socially constructed are the same people who think empiricism is a useless project. I don’t agree with the latter point (i.e. I think empiricism is the only true cause of human advancement), but the former point is powerful and has numerous relevant implications for Friendly AI and raising the sanity line generally. So when anti-empiricism social-construction people show up, I try to persuade them that empiricism is worthwhile so that their other insights can benefit the community. Whether this persuasion is possible is a distinct question from whether the persuasion is a “good thing.”
Note that your example is not that pattern, and I haven’t responded to Clueless. C is anti-empiricism, but he hasn’t shown anything that makes me think that he has anything valuable to contribute to the community—he’s 100% confused. So it isn’t worth my time to try to persuade him to be less wrong.
I’m stating an expectation of a policy’s effectiveness.
I think Monkeymind is deliberately trying to gather lots of negative karma as fast as possible. Maybe for a bet?
If the goal was −100, then writing should stop now (prediction).
I’m not the one who downvoted you, but if I were to hazard a guess, I’d say you were downvoted because when you start off by saying “people like me”, it immediately sets off a warning in my head. That warning says that you have not separated personal identity from your judgment process. At the very least, by establishing yourself as a member of “people like me”, you signify that you have already given up on trying to be less wrong, and resigned yourself to being more wrong. (I strongly dislike using the terms “less wrong” and “more wrong” to describe elites and peasants of LW, but I’m using them to point out to you the identity you’ve painted for yourself.)
Also, there is /always/ something you can do about a problem. The answer to this particular problem is not, “Noobs will be noobs, let’s give up”.
If by “giving up on trying to be less wrong,” you mean I’m never going to be an expert on AI, decision theory, or philosophy of consciousness, then fine. I think that definition is idiosyncratic and unhelpful.
Raising the sanity line does not require any of those things.
Don’t put up straw men; I never said that to be less wrong, you had to do all those things. “Less wrong” represents an attitude towards the world, not an endpoint.
Then I do not understand what you mean when you say I am “giving up on trying to be less wrong”
Could I get an explanation for the downvotes?
If someone like Eliezer Yudkowsky reads this, then they probably think that the most important cognitive bias you have is that you are not interested in AI :-)
A comment by Eliezer Yudkowsky:
You’re selectively misquoting that comment, in particular removing the third criterion of importance listed in it that has nothing to do with AI. In context EY’s comment does not at all seem to dismiss non-FAI concerns, but in your recap it does. Fie.
I linked to the original comment. I didn’t mention the third point because I think that it is abundantly clear that Less Wrong has been created with the goal in mind of getting people to support SI:
The Sequences were written with the goal of convincing people to take risks from AI seriously and therefore donate to SI: “...after a few years of beating my head against the wall trying to get other people involved, I realized that I really did have to go back to the beginning, start over, and explain all the basics that people needed to know before they could follow the advanced arguments. Saving the world via AI research simply can’t compete against the Society for Treating Rare Diseases in Cute Kittens unless your audience knows about things like scope insensitivity...” (Reference: An interview with Eliezer Yudkowsky).
Less Wrong is used to ask for donations.
You can find a logo with a link to SI in the header and a logo and a link to LessWrong on SIAI’s frontpage.
LessWrong is mentioned as an achievement of SI (Quote: “Less Wrong is important to the Singularity Institute’s work towards a beneficial Singularity”).
A quote from the official SIAI homepage: “Less Wrong is [...] a key venue for SIAI recruitment”.
Now if you say that you don’t care about AI, that does pretty much exclude you from the group of people this community is meant to allure.
Nothing of what you just wrote justifies your changing the meaning of the comment you quoted by selectively removing parts of it you happen to think are not representative.
Regarding the rest of your comment: it both distorts history and makes irrelevant points. LessWrong was created as a rationality community, not an AI risk propaganda vehicle, even though yes, that was one of the goals (in fact, LW had an initial taboo period on the AI risk theme specifically to strengthen the other interests). The connections between LW and SIAI do not mean that one exists solely for the sake of the other. And finally, and most importantly, even if EY did create LW solely for the purpose of getting more money for SIAI—which I don’t believe—that’s no reason for other users of the site to obey the same imperative or share the same goal. I’m sympathetic towards SIAI but far from being convinced by them and I’m able to participate in LW just fine. I’m far from being alone in this. LW is what its userbase makes it.
The passive voice in “this community is meant to allure” makes it almost a meaningless statement. Who is doing the meaning? LW is what it is, and nobody has to care who it is “meant to allure”. It allures people who are drawn to topics discussed on it.
Note that as Eliezer says here
I expect to be able to find at least a dozen quotes where he contradicts himself there, if I cared enough to spend that much time on looking for them. Here are just a few:
(Please read up on the context.)
...
...
Given the evidence I find it hard to believe that he does not care if lesswrong members do not believe that AI risk is the most important issue today. I also don’t think that he would call someone a rationalist who has read everything that he wrote and decided not to care about AI risk.
You’ve got selective quotation down to an art form. I’m a bit jealous.
While true as written, it does not necessarily exclude the parent from the community as it is.
To argue this you would have to argue that as an average human being it would be rational not to care about AI. I welcome Eliezer or another SI member to tell me that I am wrong here. But I think that they believe that if you are interested in raising the sanity waterline and refining the art of rationality then you can’t at the same time ignore AI risk.
If you disagree with that then you disagree with Eliezer, who wrote the Sequences and who believes that they conclusively show that you should care about AI risk. If you disagree about this then you seem to fundamentally disagree with the idea of rationality this community was based on, or at least came to a different conclusion than the person who wrote most of its content. If that doesn’t exclude you from the community, then what does?
...or maybe you simply don’t share their utilitarian values.
No one really cares all that much whether other lesswrong participants care about AI risk. This isn’t an AI forum. We’ve had very few posts on the subject (relative to the level of interest in the subject that the authors of many of the posts happen to have). That subject was even banned here for a while.
Nothing, perhaps unfortunately. If I could freely wield an ‘exclude’-hammer (and there were no negative externalities for doing so), the things that would earn it are consistent bad logic, excessive use of straw men and non-sequiturs, evident inability or unwillingness to learn, and especially being overtly anti-social.
Maybe you could elaborate on this so that I understand it better. How could all those people who happen to contribute money to the mitigation of AI risk not “care all that much whether other lesswrong participants care about AI risk”? For that stance to make sense, the reason for their donations couldn’t possibly be that they wish to support SI. Because then they would care a lot whether other people took the cause seriously as well, since each person who takes AI risk seriously increases the chance of SI receiving additional support.
And since this community is about rationality, and not just instrumental but also epistemic rationality, everyone who believes that AI risk is the most important issue that humanity faces right now should ask themselves why other rationalists do not care about it as well.
Either it is rational to be worried about AI risk or it isn’t. If it is rational, then, given that this community is devoted to the art of rationality, one should care about people who do not take AI risk seriously. Not only might they increase support for that irrational stance; their existence also hints at a general problem with the standards of the community and the effectiveness with which it teaches rationality.
You might argue that those people who do not care about AI risk simply have different preferences. I don’t see that there are many humans who have preferences that allow them to ignore their own demise and the end of all human values in the universe.
You might further argue that it isn’t irrational not to worry about AI risk. This might be true. Could you please tell me where I can find arguments that support that stance?
Donating a trivial amount to one charity is a big leap from ostracising all those that don’t.
It is irrational. It’s just not something there is any point in being personally offended at, or excluding from the local environment. On the other hand, people not believing that correct reasoning about the likelihood of events is that which most effectively approximates Bayesian updating have far more cause to be excluded from the site—because this is a site where that is a core premise.
I’m almost certain you are more likely to have collected such links than I. Because I care rather a lot less about controlling people’s beliefs on the subject.
On various occasions people have voiced an antipathy to my criticisms of AI risk. If the same people do not mind if other members do not care about AI risk, then it seems to be a valid conclusion that they don’t care what people believe as long as they do not criticize their own beliefs.
Those people might now qualify their position by stating that they only have an antipathy against poor criticisms of their beliefs. But this would imply that they do not mind people who do not care about AI risk for poor reasons as long as they do not voice their reasons.
But even a trivial amount of money is a bigger signal than the proclamation that you believe that people who do not care about AI risk are irrational and that they therefore do not fit the standards of this community. The former takes more effort than writing a comment stating the latter.
Other words could be used in the place of ‘poor’ that may more accurately convey what it is that bothers people. “Incessant” or “belligerent” would be two of the more polite examples of such. Some would also take issue with the “their beliefs” phrase, pointing out that the criticisms aren’t sufficiently informed to be actual criticisms of their beliefs rather than straw men.
It remains the case that people don’t care all that much whether other folks on lesswrong have a particular attitude to AI risk.
I read that as “The Leader says that we should care about X, so if you don’t care about X then you disagree with The Leader and must be shunned”. I have a lot of respect for Eliezer, but he is no god, and disagreement with him is not sacrilege. Hell, it’s right there in the name of the site—it’s not called “Right”, it’s called “Less Wrong”—as in, you’re still wrong about something, because everybody is.
I’m assuming one of these is true, because the alternative is some sort of reading comprehension failure. LW is not a community of people who lack irrational beliefs!
I think the barrier of entry is high enough—the signal-to-noise ratio is high, and if you only read high-karma posts and comments you are guaranteed to get substance.
As for forcing people to read the entire Sequences, I’d say rationalwiki’s critique is very appropriate (below). I myself have only read ~20% of the Sequences, and by focusing on the core sequences and highlighted articles, have recognized all the ideas/techniques people refer to in the main-page and discussion posts.
You should try reading the other 80% of the sequences.
As far as I can tell (low votes, some in the negative, few comments), the QM sequence is the least read of the sequences, and yet makes a lot of EY’s key points used later on identity and decision theory. So most LW readers seem not to have read it.
Suggestion: a straw poll on who’s read which sequences.
I’ve seen enough of the QM sequence and know enough QM to see that Eliezer stopped learning quantum mechanics before getting to density matrices. As a result, the conclusions he draws from QM rely on metaphysical assumptions and seem rather arbitrary if one knows more quantum mechanics. In the comments to this post Scott Aaronson tries to explain this to Eliezer without much success.
Could you be specific about which conclusions seem arbitrarily based on which metaphysical assumptions?
I just answered a similar question in another thread here.
Note: please reply there so we can consolidate discussions.
I’ve read it, but I took away less from it than any of the other sequences. Reading any of the other sequences, I can agree or disagree with the conclusion and articulate why. With the QM sequence, my response is more along the lines of “I can’t treat this as very strong evidence of anything because I don’t think I’m qualified to tell whether it’s correct or not.” Eliezer’s not a physicist either, although his level of fluency is above mine, and while I consider him a very formidable rationalist as humans go, I’m not sure he really knows enough to draw the conclusions he does with such confidence.
I’ve seen the QM sequence endorsed by at least one person who is a theoretical physicist, but on the other hand, I’ve read Mitchell Porter’s criticisms of Eliezer’s interpretation and they sound comparably plausible given my level of knowledge, so I’m not left thinking I have much more grounds to favor any particular quantum interpretation than when I started.
A poll would be good.
I’ve read the QM sequence and it really is one of the most important sequences. When I suggest this at meetups and such, people seem to be under the impression that it’s just Eliezer going off topic for a while and totally optional. This is not the case, the QM sequence is used like you said to develop a huge number of later things.
The negative comments from physicists and physics students are sort of a worry (to me as someone who got up to the start of studying this stuff in second-year engineering physics and can’t remember one dot of it). Perhaps it could do with a robustified rewrite, if anyone sufficiently knowledgeable can be bothered.
The negative comments I’ve heard give off a strong scent of being highly motivated - in one case an incredible amount of bark bark bark about how awful they were, and when I pressed for details, a pretty pathetic bite. I’d like to get a physicist who didn’t seem motivated to have an opinion one way or the other to comment.
It would need to be someone who bought MWI—if the sole problem with them is that they endorse MWI then that’s at least academically respectable, and if an expert reading them doesn’t buy MWI then they’ll be motivated to find problems in a way that won’t be as informative as we’d like.
The Quantum Physics Sequence is unusual in that normally, if someone writes 100,000(?) words explaining quantum mechanics for a general audience, they genuinely know the subject first: they have a physics degree, they have had an independent reason to perform a few quantum-mechanical calculations, something like that. It seems to me that Eliezer first got his ideas about quantum mechanics from Penrose’s Emperor’s New Mind, and then amended his views by adopting many-worlds, which was probably favored among people on the Extropians mailing list in the late 1990s. This would have been supplemented by some incidental study of textbooks, Feynman lectures, expository web pages… but nonetheless, that appears to be the extent of it. The progression from Penrose to Everett would explain why he presents the main interpretive choice as between wavefunction realism with objective collapse, and wavefunction realism with no collapse. His prose is qualitative just about everywhere, indicating that he has studied quantum mechanics just enough to satisfy himself that he has obtained a conceptual understanding, but not to the point of quantitative competence. And then he has undertaken to convey this qualitative conceptual understanding to other people who don’t have quantitative competence in the subject, either.
I can recognize all this because I am also an autodidact and I have done comparable things. It’s possible to do this and get away with making only a few incidental mistakes. But it is a very risky thing to do. You run a high risk of fooling yourself and then causing your audience to fool themselves too. This is especially the case in mathematical physics. Literally every day I see people asking questions on physics websites that are premised on wrong assumptions about physical theory. I don’t mean questions where people say “is it really true that...”, I mean questions where the questioner thinks they already understand some topic, and the question proceeds from this incorrect understanding, sometimes quite aggressively in tone (recently observed example).
My opinion about the Sequences is that someone who knows nothing about QM can learn from them, but it’s worth getting a second opinion, even just from Wikipedia, since they present a rather ideological point of view. Also, when you read them, you’re simply not hearing from someone who has used quantum mechanics professionally; you’re hearing from an autodidact who thinks he figured out the gist of the subject—I’d say, very roughly, he gets about 75% of the basics, and the problems are more in what is omitted rather than what is described (e.g. nothing, that I recall, about the role of operators) - and who has decided that one prominent minority faction of opinion among physicists (the many-worlds enthusiasts) are the ones who have correctly discerned the implications of QM for the nature of reality. The fact that he espouses, as the one true interpretation, a point of view that is shared by some genuine physicists, does at least protect him from the accusation of complete folly. Nonetheless, I can tell you—as one autodidact judging another—he’s backed the wrong horse. :-)
If you want an independent evaluation of the Sequences by physicists, I suggest that you post this as a question at Physics Stack Exchange. Ask what people think of them, and whether they can be trusted. There’s a commenter there, Ron Maimon, who is the most readily available approximation to what you want. Maimon is quantitatively competent in all sorts of advanced physics, and he was once a MWI zealot. Now he’s more positivistic, but MWI is still his preferred language for making ontological sense of QM. I would expect him to offer qualified approval of the Sequences, but to make some astute comments regarding content or style.
Since it is a forum where everyone gets a chance to answer the question, with the best replies then being voted up by the readership, of course such a question would also lead to responses by people who don’t believe MWI. But this is the quickest way to conduct the experiment you suggest.
Excellent idea—done. Thank you!
Result from Ron Maimon’s review of the QM sequence:
(more at the link from ciphergoth’s post)
You could also ask for an independent evaluation of AI risks here.
That seems less valuable. The QM sequences are largely there to set out what is supposed to be an existing, widespread understanding of QM. No such understanding exists for AI risk.
So why isn’t that pointed out anywhere? EY seems oddly oblivious to his potential—indeed likely—limitations as an autodidact.
This was a big concern I had reading it. Much of it made sense to me, as someone who has had formal education in basic quantum, and some of it felt very illuminating(the waveform-addition stuff in particular was taught far better than my quantum prof ever managed), but I’m always skeptical of people claiming Truth of a controversy in a highly technical field with no actual training in that field. I’ve always preferred many-worlds, but I would never claim it is the sole truth in the sort of way that EY did.
What reason do I have to believe that this risk isn’t even stronger when it comes to AI?
It’s not clear how to compare said risk—“quantum” is far more widely abused—but the creationist AI researcher suggests AI may be severely prone to the problem. Particularly as humans are predisposed to think of minds as ontologically basic, therefore pretty simple, therefore something they can have a meaningful opinion on, regardless of the evidence to the contrary.
What, you mean the part where we’re discussing a field that’s still highly theoretical, with no actual empirical evidence whatsoever, and then determining that it is definitely the biggest threat to humanity imaginable and that anyone who doesn’t acknowledge that is a fool?
This is one of the classic straw men, adaptable to any purpose.
Mockery is generally rather adaptable, yes.
I suspect a lot of it is “oh dear, someone saying ‘quantum’” fatigue. But that sounds a plausible approach.
Yes.
No, as far as I can tell.
Probably not, then. (The decision theory posts were where I finally hit a tl;dr wall.)
Something I recall noticing at the time I read said posts is that some of the groundwork you mention didn’t necessarily need to be in with the QM. Sure, there are a few points that you can make only by reference to QM but many of the points are not specifically dependent on that part of physics. (ie. Modularization fail!)
That there are no individual particles is something of philosophical import that it’d be difficult to say without bludgeoning the point home, as the possibility is such a strong implicit philosophical assumption and physics having actually delivered the smackdown may be surprising. But yeah, even that could be moved elsewhere with effort. But then again, the sequences are indeed being revised and distilled into publishable rather than blog form …
Yes, that’s the one thing that really relies on it. And the physics smackdown was surprising to me when I read it.
Ideal would seem to be having the QM sequence, then later having an identity sequence wherein one post does an “import QM;”.
Of course the whole formal ‘sequence’ notion is something that was invented years later. These are, after all, just a stream of blog posts that some guy spat out extremely rapidly. At that time they were interlinked as something of a DAG, with a bit of clustering involved for some of the bigger subjects.
I actually find the whole ‘sequence’ focus kind of annoying. In fact I’ve never read the sequences. What I have read a couple of times is the entire list of blog posts for several years. This includes some of my favorite posts which are stand alone and don’t even get a listing in the ‘sequences’ page.
Yes! I try to get people to read the “sequences” in ebook form, where they are presented in simple chronological order. And the title is “Eliezer Yudkowsky, blog posts 2006-2010”.
Totally, there are whole sequences of really good posts that get no mention in the wiki.
Working on it.
In all seriousness though, I often find the Sequences pretty cumbersome and roundabout. Eliezer assumes a pretty large inferential gap for each new concept, and a lot of the time the main point of an article would only need a sentence or two for it to click for me. Obviously this makes it more accessible for concepts that people are unfamiliar with, but right now it’s a turn-off and definitely is a body of work that will be greatly helped by being compressed into a book.
Fuck you.
Downvoted for linking to that site.
… what?
It’s both funny and basically accurate. I’d say it’s a perfectly good link.
David is making a joke, because he wrote most of the content of that article.
Tetronian started the article, so it’s his fault actually, even if he’s pretty much moved here.
I have noted before that taking something seriously because it pays attention to you is not in fact a good idea. Every second that LW pays a blind bit of notice to RW is a second wasted.
See also this comment on the effects of lack of outside world feedback, and a comparison to Wikipedia (which basically didn’t get any outside attention for four or five years and is now part of the infrastructure of society, at which I still boggle).
And LW may or may not be pleased that even on RW, when someone fails logic really badly the response is often couched in LW terms. So, memetic infections ahoy! Think of RW as part of the Unpleasable Fanbase.
Memetic hazard warning!
ITYM superstimulus ;-)
Not really. There is content there that is not completely useless. Especially if the ‘seconds wasted’ come out of time that would have otherwise been spent on lesswrong itself.
Ahhhh. Well, that flips my downvote.
Oh, that explains a lot!
It’s not a barrier to entry if no one actually HAS to surmount it.
Yeah, but if we make a policy of abusing and hounding out anyone who hasn’t, it’s not much better.
Kahneman’s Thinking, Fast and Slow is basically the sequences, plus some statistics, minus AI and metaethics, in (shorter) book form (well actually, the other way around, as the book was there first). So perhaps we should say “read the sequences, or that book, or otherwise learn the common mistakes”.
Strongly disagree; I think there is fairly limited overlap between the two.
Your comment describes (or at least intends to describe, as per the people disagreeing with you) Judgment under Uncertainty: Heuristics and Biases, not Thinking, Fast and Slow.
Can someone verify this for me? I’ve heard good things about the authors but my prior for that book containing everything in the (or most of the) sequences is rather low.
I disagree with the grandparent. I read the book a while ago having already read most of the Sequences—I think that the book gives a fairly good overview of heuristics and biases but doesn’t do as good of a job in turning the information into helpful intuitions. I think that the Sequences cover most (but not quite all) of what’s covered in the book, while the reverse is not true.
Lukeprog reviewed the book here: his estimate is that it contains about 30% of the Core Sequences.
The reasoning for downvote on this suggestion is not clear. What does the downvoter actually want less of?
As the suggestion stands, it’s at −2. I’m not downvoting it because I don’t think it’s so bad as to be invisible, but saying that the book is a good substitute for the sequences seems inaccurate enough to downvote. My other comment here contains (slightly) more of an explanation.
From Shirky’s essay on online groups: “The Wikipedia right now, the group collaborated online encyclopedia, is the most interesting conversational artifact I know of, where product is a result of process. Rather than ‘We’re specifically going to get together and create this presentation’ it’s just ‘What’s left is a record of what we said.’”
When somebody goes to a wiki, they are not going there to discuss elementary questions that have already been answered; they are going there to read the results of that discussion. Isn’t this basically what the OP wants?
Why aren’t we using the wiki more? We have two modes of discussion here: discussion board, and wiki. The wiki serves more as an archive of the posts that make it to main-page level, meaning that all the hard work of the commenters in the discussion boards is often lost to the winds of time. (Yes, some people have exceptionally good memory and link back to them. But this is obviously not sustainable.) If somebody has a visionary idea on how to lubricate the process of collating high-quality comments and incorporating them into a wiki-like entity, then I suspect our problem could be solved.
This is a really good question.
I don’t use the wiki because my LW account is not valid there. You need to make a separate account for the wiki.
That seems like an utterly stupid reason in retrospect, but I imagine that’s a big reason why no one is wikiing.
So it’s a trivial inconvenience?
It is explicitly mentioned (somewhere) that the wiki is only for referencing ideas and terms that have been used/discussed/explained in LW posts.
So, yes, inconvenience, but not solely.
The best way to become more exclusive while not giving the impression of a cult, or by banning people, is by raising your standards and being more technical. As exemplified by all the math communities like the n-Category Café or various computer science blogs (or most of all technical posts of lesswrong).
Stop using that word.
In fact, edit your post now please Nyan. Apart from that it’s an excellent point. “Community”, “website” or just about anything else.
“You’re a ….” is already used as a fully general counterargument. Don’t encourage it!
I want to keep the use of the word, but to hide it from Google I have replaced it with its rot13: phyg
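(For anyone unfamiliar with rot13: it shifts each letter thirteen places, so applying it twice gets you back where you started. A one-liner with Python’s standard codecs module recovers the hidden word from its replacement:)

```python
# rot13 is its own inverse, so encoding the replacement recovers the original
# word (not spelled out here, for the same Google-related reason as the post).
import codecs
print(codecs.encode("phyg", "rot_13"))
```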
And now we can all relax and have a truly uninhibited debate about whether LW is a phyg. Who would have guessed that rot13 has SEO applications?
Just to be clear, we’re all reading it as-is and pronouncing it like “fig”, right? Because that’s how I read it in my head.
I hope so, or this would make even less sense than it should.
I’ve been pronouncing it to rhyme with the first syllable in “tiger”.
No; stop. This ‘fix’ is ineffective and arguably worse.
The C-word is still there in the post URL!
Well, yes and no.
That’s much better!
(I hadn’t realised the post titles were redundant in Reddit code …)
Upvoted for agreeing and for reminding me to re-read a certain part of the sequences. I loathe fully general counterarguments, especially that one.
That being said, would it be appropriate for you to edit your own comment to remove said word? I don’t know (to any significant degree) how Google’s search algorithms work, but I suspect that having that word in your comment also negatively affects the suggested searches.
Oh, yeah, done.
You mean the one that shouldn’t be associated with us in google’s search results?
I’ll think about it.
Suggestion: “Our Ult-Cay Is Not Exclusive Enough”
I feel pain just looking at that sentence.
I sure as hell hope self-censorship or encryption for the sake of Google results isn’t going to become the expected norm here. It’s embarrassingly silly, and, paradoxically, likely to provide ammunition for anyone who might want to say that we are this thing-that-apparently-must-not-be-named. I wouldn’t be overly surprised if these guys ended up mocking it.
The original title of the post had a nice impact, the point of the rhetorical technique used was to boldly use a negatively connotated word. Now it looks weird and anything but bold.
Also, reading the same rot13-ed word multiple times caused me to learn a small portion of rot13 despite my not wanting to. Annoying.
Yes, well… I don’t give a phyg.
Your comment would have been ridiculously enhanced by this link.
What word?
The only word that shouldn’t be used for reasons that extend to not even identifying it. (google makes no use/mention distinction).
“In a riddle whose answer is chess, what is the only prohibited word?”
Are you referring to the word cult? I think it might be an attempt on the part of nyan_sandwich to remove its negative connotations but I’m not sure that’s the best idea from a publicity point of view.
dude. WTF.
What does the asterisk hide from me?
The word ult-cay (without the pig latin) repeated ~several thousand times. Like, it didn’t fit on my laptop screen.
Just this giant wall of ult-cay.
What?
I took a screenshot of the original post, and will post it if you’d care to continue pretending innocent.
Like this? http://i.imgur.com/npXFq.png
hahahahahahaha
See, this version of the comment is roughly a thousand times less objectionable.
Reading the comments, it feels like the biggest concern is not chasing away the initiates to our phyg. Perhaps tiered sections, where demonstrable knowledge of the previous tier gains you access to sections with a higher signal-to-noise ratio? Certainly would make our phyg resemble another well-known phyg.
Maybe we should charge thousands of dollars for access to the sequences as well? And hire some lawyers...
More seriously, I wonder what people’s reaction would be to a newbie section that wouldn’t be as harsh as the now-much-harsher normal discussion. This seems to go over well on the rest of the internet.
Sort of like raising the price and then having a sale...
This sounds like a good idea, but I think it might be too difficult to implement in practice, as determined users will bend their efforts toward guessing the password in order to gain access to the coveted Inner Circle. This isn’t a problem for that other phyg, because their access is gated by money, not understanding.
I think the freemasons have this one solved for us: instead of passwords, we use interview systems, where people of the level above have to agree that you are ready before you are invited to the next level. Likewise, we make it known that helpful input on the lower levels is one of the prerequisites to gaining a higher level: we incentivise constructive input on the lower tiers, and effectively gate access to the higher tiers.
Why does this solution need to be so global? Why don’t we simply allow users to blacklist/whitelist other users as they see fit, on an individual basis? This way, if someone wants to form an ultra-elite cabal, they can do that without disturbing the rest of the site for anyone else.
So, who is going to sit on the interview committee to control access to a webforum? You’re asking more of the community than it will ever give you, because what you advocate is an absurd waste of time for any actual person.
The SCP Foundation creepypasta wiki used to use a very complex application system, designed to weed out those with insufficient writing skill. It turned away a fairly significant number of potential writers due to its sheer size. It was also maintained through Google Docs by one dedicated admin for several years. I’m not sure anyone here would give up their free time to maintaining bureaucracy rather than winning, and it seems counterproductive to me, but it’s theoretically possible that it can be kept to a part-time job.
That’s possible- it may be that the cost of doing this effectively is not worth the gain, or that there is a less intensive way to solve this issue. However, I think there could be benefits to a tiered structure- perhaps even have the levels be read-only for those not there yet- so everyone can read the high signal-to-noise content, but we still make sure to protect it. I do know there is much evidence to suggest that prestige within even small groups is enough to motivate people to do things that would normally be considered an absurd waste of time.
You’re not proposing a different system, you’re just proposing additional qualifiers.
Sounds like a good idea; it would give many people an incentive to read and understand the sequences, and could raise the quality level in the higher ‘levels’ considerably. There are also downsides: we might look more phyg-ish to newbies, discussion quality at the lower levels could fall rapidly (honestly, who wants to debate ‘free will’ with newbies when they could be having discussions about more interesting and challenging topics?) and, well, if an intelligent and well-informed outsider has something important to say about a topic, they won’t be able to.
For this to be implemented, we’d need a user rights system with the respective discussion sections, as well as a way to determine the ‘level’ of members. Quizzes with questions randomly drawn from a large pool, with a limited number of tries per time period, could do well, especially if you don’t give any feedback about the scoring other than ‘you leveled up!’ and ‘Your score wasn’t good enough; re-read these sequences:__ and try again later.’
And, of course, we need the consent of many members and our phyg-leaders as well as someone to actually implement it.
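To make the quiz-gating idea above a bit more concrete, here is a minimal TypeScript sketch. Everything in it is a placeholder assumption (the question pool, the pass threshold, the retry window, the in-memory attempt log); a real version would need persistent storage, a proper shuffle, and a hook into the site’s user-rights system.

```typescript
// Sketch of a quiz gate for tier access (all names, numbers, and storage are hypothetical).
interface Question {
  id: string;
  sequence: string;                      // which sequence this question comes from
  check: (answer: string) => boolean;    // grader for a free-form answer
}

interface Attempt { userId: string; timestamp: number; }

const POOL: Question[] = [];             // large pool of questions; must be populated
const PASS_THRESHOLD = 0.8;              // fraction of correct answers needed to level up
const MAX_ATTEMPTS = 3;                  // tries allowed per time window
const WINDOW_MS = 7 * 24 * 3600 * 1000;  // one week
const attempts: Attempt[] = [];          // in-memory log; a real site would persist this

function drawQuiz(n: number): Question[] {
  // Quick-and-dirty shuffle so retakes see different questions (biased, but fine for a sketch).
  return [...POOL].sort(() => Math.random() - 0.5).slice(0, n);
}

function gradeQuiz(userId: string, quiz: Question[], answers: string[]): string {
  const recent = attempts.filter(
    a => a.userId === userId && Date.now() - a.timestamp < WINDOW_MS
  );
  if (recent.length >= MAX_ATTEMPTS) return "Too many attempts; try again later.";
  attempts.push({ userId, timestamp: Date.now() });

  const wrong = quiz.filter((q, i) => !q.check(answers[i]));
  if ((quiz.length - wrong.length) / quiz.length >= PASS_THRESHOLD) return "You leveled up!";

  // Report which sequences to re-read, but never the numeric score.
  const toReread = Array.from(new Set(wrong.map(q => q.sequence)));
  return `Your score wasn't good enough; re-read these sequences: ${toReread.join(", ")} and try again later.`;
}
```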
Instead of setting up gatekeepers, why not let people sort themselves first?
No one wants to be a bozo. We have different interests and aptitudes. Set up separate forums to talk about the major sequences, so there’s some subset of the sequences you could read to get started.
I’d suggest too that as wonderful as EY is, he is not the fount of all wisdom. Instead of focusing on getting people to shut up, how about focusing on getting people to add good ideas that aren’t already here?
Depending on other factors, it could also resemble a school system.
Rationology?
Edit: I apologize.
What you want is an exclusive club. Not a cult or phyg or whatever.
There’s only one letter’s difference between ‘club’ and ‘phyg’!
And there is only one letter’s difference between paid and pain. The meaning of an English word is generally not determined by the letters it contains.
I personally come to Less Wrong specifically for the debates (well, that, and HP:MoR Wild Mass Guessing). Therefore, raising the barrier to entry would be exactly the opposite of what I want, since it would eliminate many fresh voices, and limit the conversation to those who’d already read all of the sequences (a category that would exclude myself, now that I think about it), and agree with everything said therein. You can quibble about whether such a community would constitute a “phyg” or not, but it definitely wouldn’t be a place where any productive debate could occur. People who wholeheartedly agree with each other tend not to debate.
Oh, and by the way, there are other strategies for dispelling the public perception of your community being a “phyg”, besides using rot13. Not being an ultra-exclusive “phyg” is one such strategy. If you find yourself turning to rot13 instead, then IMO the battle has already been lost.
I don’t see why having the debate at a higher level of knowledge would be a bad thing. Just because everyone is familiar with a large body of useful common knowledge doesn’t mean no one disagrees with it, or that there is nothing left to talk about. There are some LW people who have read everything and bring up interesting critiques.
Imagine watching a debate between some uneducated folks about whether a tree falling in a forest makes a sound or not. Not very interesting. Having read the sequences it’s the same sort of boring as someone explaining for the millionth time that “no, technological progress or happiness is not a sufficient goal to produce a valuable future, and yes, an AI coded with that goal would kill us all, and it would suck”.
The point of my post was that that is not an acceptable solution.
Firstly, a large proportion of the Sequences do not constitute “knowledge”, but opinion. It’s well-reasoned, well-presented opinion, but opinion nonetheless—which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren’t in the sequences, that’s fun too. Secondly:
No, it’s not very interesting to you and me, but to the “uneducated folks” whom you dismiss so readily, it might be interesting indeed. Ignorance is not the same as stupidity, and, unlike stupidity, it’s easily correctable. However, kicking people out for being ignorant does not facilitate such correction.
What’s your solution, then? You say,
To me, “more exclusive LW” sounds exactly like the kind of solution that doesn’t work, especially coupled with “enforcing a little more strongly that people read the sequences” (in some unspecified yet vaguely menacing way).
Whether the sequences constitute knowledge is beside the point—they constitute a baseline for debate. People should be familiar with at least some previously stated well-reasoned, well-presented opinions before they try to debate a topic, especially when we have people going through the trouble of maintaining a wiki that catalogs relevant ideas and opinions that have already been expressed here. If people aren’t willing or able to pick up the basic opinions already out there, they will almost never be able to bring anything of value to the conversation. Especially on topics discussed here that lack sufficient public exposure to ensure that at least the worst ideas have been weeded out of the minds of most reasonably intelligent people.
I’ve participated in a lot of forums (mostly freethought/rationality forums), and by far the most common cause of poor discussion quality among all of them was a lack of basic familiarity with the topic and the rehashing of tired, old, wrong arguments that pop into nearly everyone’s head (at least for a moment) upon considering a topic for the first time. This community is much better than any other I’ve been a part of in this respect, but I have noticed a slow decline in this department.
All of that said, I’m not sure if LW is really the place for heavily moderated, high-level technical discussions. It isn’t sl4, and outreach and community building really outweigh the more technical topics, and (at least as long as I’ve been here) this has steadily become more and more the case. However, I would really like to see the sort of site the OP describes (something more like sl4) as a sister site (or if one already exists I’d like a link). The more technical discussions and posts, when they are done well, are by far what I like most about LW.
I agree with pretty much everything you said (except for the sl4 stuff, because I haven’t been a part of that community and thus have no opinion about it one way or another). However, I do believe that LW can be the place for both types of discussions—outreach as well as technical. I’m not proposing that we set the barrier to entry at zero; I merely think that the guideline, “you must have read and understood all of the Sequences before posting anything” sets the barrier too high.
I also think that we should be tolerant of people who disagree with some of the Sequences; they are just blog posts, not holy gospels. But it’s possible that I’m biased in this regard, since I myself do not agree with everything Eliezer says in those posts.
Disagreement is perfectly fine by me. I don’t agree with the entirety of the sequences either. It’s disagreement without looking at the arguments first that bothers me.
What is the difference between knowledge and opinion? Are the points in the sequences true or not?
Read map and territory, and understand the way of Bayes.
The thing is, there are other places on the internet where you can talk to people who have not read the sequences. I want somewhere where I can talk to people who have read the LW material, so that I can have a worthwhile discussion without getting bogged down by having to explain that there’s no qualitative difference between opinion and fact.
I don’t have any really good ideas about how we might be able to have an enlightened discussion and still be friendly to newcomers. Identifying a problem, and identifying myself among people who don’t want a particular type of solution (relaxing LW’s phygish standards), doesn’t mean I support any particular straw-solution.
Some proportion of them (between 0 and 100%) are true, others are false or neither. Not being omniscient, I can’t tell you which ones are which; I can only tell you which ones I believe are likely to be true with some probability. The proportion of those is far smaller than 100%, IMO.
See, it’s exactly this kind of ponderous verbiage that leads to the necessity for rot13-ing certain words.
I believe that there is a significant difference between opinion and fact, though arguably not a qualitative one. For example, “rocks tend to fall down” is a fact, but “the Singularity is imminent” is an opinion—in my opinion—and so is “we should kick out anyone who hadn’t read the entirety of the Sequences”.
When you said “we should make LW more exclusive”, what did you mean, then?
In any case, I do have a solution for you: why don’t you just code up a Greasemonkey scriptlet (or something similar) to hide the comments of anyone with less than, say, 5000 karma? This way you can browse the site in peace, without getting distracted by our pedestrian mutterings. Better yet, you could have your scriptlet simply blacklist everyone by default, except for certain specific usernames whom you personally approve of. Then you can create your own “phyg” and make it as exclusive as you want.
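For what it’s worth, the scriptlet itself would be short. A minimal userscript-style sketch in TypeScript, with purely hypothetical selectors and attributes (I haven’t checked LW’s actual markup), might look like this:

```typescript
// Hide comments from low-karma authors unless they are explicitly whitelisted.
// The ".comment", ".author", and "data-author-karma" hooks are hypothetical placeholders.
const KARMA_THRESHOLD = 5000;
const WHITELIST = new Set<string>(["ExampleUser1", "ExampleUser2"]); // usernames you approve of

document.querySelectorAll<HTMLElement>(".comment").forEach(comment => {
  const author = comment.querySelector(".author")?.textContent?.trim() ?? "";
  const karma = parseInt(comment.getAttribute("data-author-karma") ?? "0", 10);

  if (karma < KARMA_THRESHOLD && !WHITELIST.has(author)) {
    comment.style.display = "none"; // or collapse it, to keep reply threading visible
  }
});
```

Flipping the condition (hide everyone not in WHITELIST) gives the blacklist-by-default version.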
This would disrupt the flow of discussion.
I tried this on one site. The script did hide the offending comments from my eyes, but other people still saw those comments and responded to them. So I did not have to read the bad comments, but I had to read the reactions to them. I could have improved my script to filter out those reactions too, but...
Humans react to the environment. We cannot consciously decide to filter something out and refuse to be influenced. If I come to a discussion with 9 stupid comments and 1 smart comment, my reaction will be different than if there were only the 1 smart comment. I can’t filter those 9 comments out. Reading them wastes my time and changes my emotions. So even if you filter those 9 comments out with software but I don’t, the discussion between the two of us will still be indirectly influenced by them. Most probably, if I see 9 stupid comments, I will stop reading the article, so I will skip the 1 smart one too.
People have evolved communication strategies that don’t work on the internet, because the necessary infrastructure is missing. If the two of us were speaking in the real world and a third person tried to join our discussion, but I considered them rather stupid, you would see it in my body language even if I didn’t openly tell the person to buzz off. But when we speak online and I ignore someone’s comments, you don’t see it; that communication channel is missing. Karma does something like this, it just represents the collective emotion instead of an individual one. (Perhaps a better approximation would be if the software allowed you to select people you consider smart, and then you would see karma based only on their clicks.)
Creating a good virtual discussion is difficult, because our instincts are based on different assumptions.
I see, so you felt that the comments of “smart” (as per your filtering criteria) people were still irrevocably tainted by the fact that they were replying to “stupid” (as per your filtering criteria) people. In this case, I think you could build upon my other solution. You could blacklist everyone by default, then personally contact individual “smart” people and invite them to your darknet. The price of admission is to blacklist everyone but yourself and the people you personally approve of. When someone breaks this policy, you could just blacklist them again.
Slashdot has something like this (though not exactly). I think it’s a neat idea. If you implemented this, I’d even be interested in trying it out, provided that I could see the two scores (smart-only as well as all-inclusive) side-by-side.
And everyone’s assumptions are different, which is why I’m very much against global solutions such as “ban everyone who hadn’t read the Sequences”, or something to that extent.
Personally, though, I would prefer to err on the side of experiencing negative emotions now and then. I do not want to fall into a death spiral that leads me to forming a cabal of people where everyone agrees with each other, and we spend all day talking about how awesome we are—which is what nearly always happens when people decide to shut out dissenting voices. That’s just my personal choice, though; anyone else should be able to form whichever cabal they desire, based on their own preferences.
The first step (blacklisting everyone except me and the people I approve of) is easy. Expanding the network depends on other people joining the same system, or at least being willing to send me a list of people they approve of. I think that most people use default settings, so this system would work best on a site where this was the default setting.
It would be interesting to find a good algorithm, which would take the following data as input: each user can put other users on their whitelist or blacklist, and can upvote or downvote comments by other users. It could somehow calculate the similarity of opinions and then show everyone the content they want to see (their extrapolated volition). (The explicit blacklists exist only to override the recommendations of the algorithm. By default, an unknown and unconnected person is invisible, except for their comments upvoted by my friends.)
If the site is visible for anonymous readers, a global karma is necessary. Though it can be somehow calculated from the customized karmas.
I also wouldn’t like to be shielded from disagreeing opinions. I want to be shielded from stupidity and offensiveness, to protect my emotions. Also, because my time is limited, I want to be shielded from noise. No algorithm will be perfect in filtering out the noise and not filtering out the disagreement.
I think a reasonable approach is to calculate the probability of “reasonable disagreement” based on the previous comments. This is something that we approximately do in real life—based on our previous experience we take some people’s opinions more seriously, so when someone disagrees with us, we react differently based on who it is. If I agree with someone about many things, then I will take their opinion more seriously when we disagree. However, if someone disagrees about almost everything, I simply consider them crazy.
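A rough TypeScript sketch of the kind of algorithm described a few comments up, combining explicit white/blacklists with vote-agreement weighting; the data structures and the agreement measure are my own assumptions, and a real version would probably want PageRank-style trust propagation rather than simple pairwise agreement:

```typescript
// Personalized comment visibility: explicit lists override everything; otherwise votes are
// weighted by how often the voter has agreed with the viewer in the past. All hypothetical.
type Vote = 1 | -1;

interface UserData {
  whitelist: Set<string>;    // user ids whose comments are always shown
  blacklist: Set<string>;    // user ids whose comments are never shown
  votes: Map<string, Vote>;  // commentId -> this user's vote
}

// Agreement between two users in [-1, 1]: +1 if they always vote the same way on shared comments.
function agreement(a: UserData, b: UserData): number {
  let shared = 0;
  let same = 0;
  for (const [commentId, vote] of a.votes) {
    const other = b.votes.get(commentId);
    if (other !== undefined) {
      shared++;
      if (other === vote) same++;
    }
  }
  return shared === 0 ? 0 : (2 * same - shared) / shared;
}

// Personalized score of one comment for one viewer.
function personalizedScore(
  viewer: UserData,
  users: Map<string, UserData>,
  commentId: string,
  authorId: string
): number {
  if (viewer.blacklist.has(authorId)) return -Infinity;
  if (viewer.whitelist.has(authorId)) return Infinity;

  let score = 0;
  for (const u of users.values()) {
    const vote = u.votes.get(commentId);
    if (vote !== undefined) score += vote * Math.max(0, agreement(viewer, u));
  }
  return score; // show the comment only if this clears the viewer's chosen threshold
}
```

A global karma for anonymous readers could fall out of the same data by averaging the personalized scores over all users, or over some trusted seed set.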
I think this is a minor convenience at best; when you choose to form your darknet, you could simply inform the other candidates of your plan: via email, PM, or some other out-of-band channel.
This sounds pretty similar to Google’s PageRank, only for comments instead of pages. Should be doable.
Yes, of course. The goal is not to turn the entire site exclusively into your darknet, but to allow you to run your darknet in parallel with the normal site as seen by everyone else.
Agreed; if you could figure out a perfect filtering algorithm, you would end up implementing an Oracle-grade AI, and then we’d have a whole lot of other problems to worry about :-)
That said, I personally tend to distrust my emotions. I’d rather take an emotional hit, than risk missing some important point just because it makes me feel bad; thus, I wouldn’t want to join a darknet such as yours. That’s just me though, your experience is probably different.
I mean that I’d like to be able to participate in discussion with better (possibly phygish) standards. Lesswrong has a lot of potential and I don’t think we are doing as well as we could on the quality-of-discussion front. And I think making Lesswrong purely more open and welcoming without doing something to keep a high level of quality somewhere is a bad idea. And I’m not afraid of being a phyg.
That’s all, nothing revolutionary.
It seems like my proposed solution would work for you, then. With it, you can ignore anyone who isn’t enlightened enough, while keeping the site itself as welcoming and newbie-friendly as it currently is.
I’m not afraid of it either, I just don’t think that power-sliding down a death spiral is a good idea. I don’t need people to tell me how awesome I am, I want them to show me how wrong I am so that I can update my beliefs.
Specifically ‘the way of’. Would you have the same objection to ‘and understand how Bayesian updating works’? (Objection to presumptuousness aside.)
Probably. The same sentiment could be expressed as something like this:
This phrasing is still a bit condescending, but a) it gives an actual link for me to read and educate my ignorant self, and b) it makes the speaker sound merely like a stuck-up long-timer, instead of a creepy phyg-ist.
Educating people is like that!
What I would have said about the phrasing is that it is wrong.
Merely telling people that they aren’t worthy is not very educational; it’s much better to tell them why you think they aren’t worthy, which is where the links come in.
Sure, but I have no problem with people being wrong, that’s what updating is for :-)
Huh? This was your example, one you advocated and one that includes a link. I essentially agreed with one of your points—your retort seems odd.
Huh again? You seemed to have missed a level of abstraction.
I want to upvote you for being right about the phygvfu language and where to put the filter.
I want to downvote you for just about everything else. You are discourteous, butthurt, and there’s more than a bit of Dunning-Kruger stuck in your teeth.
Breathe, son. He don’t wanna kick you out, he just wishes it didn’t seem like such a damn good idea.
Well, that certainly wasn’t my intention. Please go ahead and downvote me if you feel that way.
I wasn’t taking anything he said personally; as far as I can tell, there’s nothing he can do to actually kick me out, and I don’t think he even wants to do that in the first place. I do believe he’s arguing in good faith.
That said, I do believe strongly that, in order for communities to grow and continue being useful and productive, they need to welcome new members now and then; and I think that nyan_sandwich’s original solution sets the barrier to entry way too high.
You’re too kind. Of course I already did. I just wish you’d somehow split up the things I wanted to respond to.
Aside, I didn’t downvote the post I quoted and I don’t know why someone would. Maybe because we’re speaking pointlessly? Maybe because they thought I was trolling and you were feeding me?
A ‘debate club’ mindset is one of the things I would try to avoid. Debates emerge when there are new ideas to be expressed and new outlooks or bodies of knowledge to consider—and the supply of such is practically endless. You don’t go around trying to artificially encourage an environment of ignorance just so some people are sufficiently uninformed that they will try to argue trivial matters. That’s both counterproductive and distasteful.
I would not be at all disappointed if a side effect of maintaining high standards of communication causes us to lose some participants who “come to Less Wrong specifically for the debates”. Frankly, that would be among the best things we could hope for. That sort of mindset is outright toxic to conversations and often similarly deleterious to the social atmosphere.
I wasn’t suggesting we do that, FWIW.
I think there’s a difference between flame wars and informed debate. I’m in favor of the latter, not the former. On the other hand, I’m not a big fan of communities where everyone agrees with everyone else. I acknowledge that they can be useful as support groups, but I don’t think that LW is a support group, nor should it become one. Rationality is all about changing one’s beliefs, after all...
Debate is a tool for achieving truth. Why is that such a terrible thing?
I didn’t say it was. Please read again.
You said that we should avoid debate because it’s bad for the social atmosphere. I’m not seeing much difference.
No I didn’t. I said we should avoid creating a deliberate environment of ignorance just so that debate is artificially supported. To the extent that debate is a means to an end it is distinctly counterproductive to deliberately sabotage that same end so that more debate is forced.
See also: Lost purpose.
Upon rereading, I think I see what you’re getting at, but you seem to be arguing from the principle that creating ignorance is the preferred way to create debate. That seems ahem non-obvious to me. There’s no shortage of topics where informed debate is possible, and seeking to debate those does not require (and, in fact, generally works against) promoting ignorance. Coming here for debate does not imply wanting to watch an intellectual cripplefight.
I seem to be coming from a position of making a direct reply to Bugmaster with the specific paragraph I was replying to quoted. That should have made the meaning more obvious to you.
Which is what I myself advocated with:
(italics mine)
How did you arrive at that idea?
The point isn’t to agree with the stuff, but to be familiar with it, with standard arguments that the Sequences establish. If you tried to talk advanced mathematics/philosophy/whatever with people, and didn’t know the necessary math/philosophy/whatever, people would tell you some equivalent of “read the sequences”.
This is not the rest of the Internet, where everyone is entitled to their opinion and the result is that discussions never get anywhere (in reality, nobody is really interested in anyone’s mere opinion, and the result is something like this). If you’re posting uninformedly and rehashing old stuff or committing errors the core sequences teach you not to commit, you’re producing noise.
This is what I love about LW. There is an actual signal-to-noise ratio, rather than a sea of mere opinion.
nyan_sandwich said that the Sequences contain not merely arguments, but knowledge. This implies a rather high level of agreement with the material.
I agree, but:
I am perfectly fine with that, as long as they don’t just say, “read all of the Sequences and then report back when you’re ready”, but rather, “your arguments have already been discussed in depth in the following sequence: $url”. The first sentence merely dismisses the reader; the second one provides useful material.
Yesss … the sequences are great stuff, but they do not reach the level of constituting settled science.
They are quite definitely settled tropes, but that’s a different level of thing. Expecting familiarity with them may (or may not) be reasonable; expecting people to treat them as knowledge is rather another thing.
Hm, that’s a little tricky. I happen to agree that they contain much knowledge—they aren’t pure knowledge, there is opinion there, but there is a considerable body of insight and technique useful to a rationalist (that is, useful if you want to be good at arriving at true beliefs or making decisions that achieve your goals). Enough that it makes sense to want debate to continue from that level, rather than from scratch.
However, let’s keep our eyes on the ball—that being the true expectation around here. The expectation is emphatically NOT that people should agree with the material in the Sequences. Merely that we don’t have to re-hash the basics.
Besides, if you manage to read a sequence, understand it, and still disagree, that means your reply is likely to be interesting and highly upvoted.
Hm. Yeah, I wouldn’t want anyone to actually be told “read all the sequences” (and afaik this never happens). It’d be unreasonable to, say, expect people to read the quantum mechanics sequence if they don’t intend to discuss QM interpretations. However, problems like what is evidence and how to avoid common reasoning failures are relevant to pretty much everything, so I think an expectation of having read Map and Territory and Mysterious Answers would be useful.
Agreed.
I emphatically agree with you there, as well; but by making this site more “phygvfu”, we risk losing this capability.
I agree that these are very useful concepts in general, but I still maintain that it’s best to provide the links to these posts in context, as opposed to simply locking out anyone who hadn’t read them—which is what nyan_sandwich seems to be suggesting.
Trouble is, I’m not really sure what nyan_sandwich is suggesting, in specific and concrete terms, over and above already existing norms and practices. “I wish we had higher quality debate” is not a mechanism.
Upvoted.
I agree pretty much completely and I think if you’re interested in Less Wrong-style rationality, you should either read and understand the sequences (yes, all of them), or go somewhere else. Edit, after many replies: This claim is too strong. I should have said instead that people should at least be making an effort to read and understand the sequences if they wish to comment here, not that everyone should read the whole volume before making a single comment.
There are those who think rationality needs to be learned through osmosis or whatever. That’s fine, but I don’t want it lowering the quality of discussion here.
I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general. This is probably one of the reasons why.
An IRC conversation I had a while ago left me with a powerful message: people will give lip service to keeping the gardens, but when it comes time to actually do it, nobody is willing to.
This is a pretty hardcore assertion.
I am thinking of lukeprog’s and Yvain’s stuff as counterexamples.
I think of them (and certain others) as exceptions that prove the rule. If you take away the foundation of the sequences and the small number of awesome people (most of whom, mind you, came here because of Eliezer’s sequences), you end up with a place that’s indistinguishable from the programmer/atheist/transhumanist/etc. crowd, which is bad if LW is supposed to be making more than nominal progress over time.
Standard disclaimer edit because I have to: The exceptions don’t prove the rule in the sense of providing evidence for the rule (indeed, they are technically evidence contrariwise), but they do allow you to notice it. This is what the phrase really means.
Considering how it was subculturally seeded, this should not be surprising. Remember that LW has proceeded in a more or less direct subcultural progression from the Extropians list of the late ’90s, with many of the same actual participants.
It’s an online community. As such, it’s a subculture and it’s going to work like one. So you’ll see the behaviour of an internet forum, with a bit of the topical stuff on top.
How would you cut down the transhumanist subcultural assumptions in the LW readership?
(If I ever describe LW to people these days it’s something like “transhumanists talking philosophy.” I believe this is an accurate description.)
Transhumanism isn’t the problem. The problem is that when people don’t read the sequences, we are no better than any other forum of that community. Too many people are not reading the sequences, and not enough people are calling them out on it.
Your edit updated me in favour of me being confused about this exception-rule business. Can you link me to something?
-Wikipedia (!!!)
(I should just avoid this phrase from now on, if it’s going to cause communication problems.)
I suspect the main cause of misunderstanding (and subsequent misuse) is omission of the relative pronoun “that”. The phrase should always be “[that is] the exception that proves the rule”, never “the exception proves the rule”.
Probably even better to just include “in cases not so excepted” at the end.
I’d always thought they prove the rule in the sense of testing it.
Exceptions don’t prove rules.
You are mostly right, which is exactly what I was getting at with the “promoted is the only good stuff” comment.
I do think there is a lot of interesting, useful stuff outside of promoted, tho, it’s just mixed with the usual programmer/atheist/transhumanist/etc-level stuff.
Um, after I read the sequences I ploughed through every LW post from the start of LW to late 2010 (when I started reading regularly). What I saw was that the sequences were revered, but most of the new and interesting stuff from that intervening couple of years was ignored. (Though it’s probably just me.)
At this point A Group Is Its Own Worst Enemy is apposite. Note the description of the fundamentalist smackdown as a stage communities go through. Note it also usually fails when it turns out the oldtimers have differing and incompatible ideas on what the implicit constitution actually was in the good old days.
tl;dr declarations of fundamentalism heuristically strike me as inherently problematic.
edit: So what about this comment rated a downvote?
edit 2: ah—the link to the Shirky essay appears to be giving the essay in the UK, but Viagra spam in the US o_0 I’ve put a copy up here.
I suspect that’s because it’s poorly indexed. This should be fixed.
This is very much why I have only read some of it.
If the more recent LW stuff was better indexed, that would be sweet.
Exchanges the look two people give each other when they each hope that the other will do something that they both want done but which neither of them wants to do.
Hey, I think “Dominions” should be played but do want to play it and did purchase the particular object at the end of the link. I don’t understand why you linked to it though.
The link text is a quote from the game description.
Ahh, now I see it. Clever description all around!
Yeah, I didn’t read it from the wiki index, I read it by going to the end of the chronological list and working forward.
Am I in some kind of internet black-hole? That link took me to some viagra spam site.
It’s a talk by Clay Shirky, called “A Group Is Its Own Worst Enemy”.
I get the essay … looking in Google, it appears someone’s done some scurvy DNS tricks with shirky.com and the Google cache is corrupted too. Eegh.
I’ve put up a copy here and changed the link in my comment..
shirky.com/writings/group_enemy.html
???
I thought it was great. Very good link.
It’s a revelatory document. I’ve seen so many online communities, of varying sizes, go through precisely what’s described there.
(Mark Dery’s Flame Wars (1994) - which I’ve lost my copy of, annoyingly—has a fair bit of material on similar matters, including one chapter that’s a blow-by-blow description of such a crisis on a BBS in the late ’80s. This was back when people could still seriously call this stuff “cyberspace.” This leads me to suspect the progression is some sort of basic fact of online subcultures. This must have had serious attention from sociologists, considering how rabidly they chase subcultures …)
LW is an online subcultural group and its problems are those of online subcultural groups; these have been faced by many, many groups in the past, and if you think they’re reminiscent of things you’ve seen happen elsewhere, you’re likely right.
Maybe if you reference Evaporative Cooling, which is the converse of the phenomenon you describe, you’d get a better reception?
I’m thinking it’s because someone appears to have corrupted DNS for Shirky’s site for US readers … I’ve put up a copy myself here.
I’m not sure it’s the same thing as evaporative cooling. At this point I want a clueful sociologist on hand.
Evaporative cooling is change to average belief from old members leaving.
Your article is about change to average belief from new members joining.
Sounds plausibly related, and well spotted … but it’s not obvious to me how they’re functionally converses in practice to the degree that you could talk about one in place of talking about the other. This is why I want someone on hand who’s thought about it harder than I have.
(And, more appositely, the problem here is specifically a complaint about newbies.)
I wasn’t suggesting that one replaced the other, but that one was conceptually useful in thinking about the other.
Definitely useful, yes. I wonder if anyone’s sent Shirky the evaporative cooling essay.
I don’t consider myself a particularly patient person when it comes to tolerating ignorance or stupidity but even so I don’t much mind if people here contribute without having done much background reading. What matters is that they don’t behave like an obnoxious prat about it and are interested in learning things.
I do support enforcing high standards of discussion. People who come here straight from their highschool debate club and Introduction to Philosophy 101 and start throwing around sub-lesswrong-standard rhetoric should be downvoted. Likewise for confident declarations of trivially false things. There should be more correction of errors that would probably be accepted (or even rewarded) in many other contexts. These are the kind of thing that don’t actively exclude but do have the side effect of raising the barrier to entry. A necessary sacrifice.
The core-sequence fail gets downvoted pretty reliably. I can’t say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.
Isn’t the metaethics sequence not liked very much? I haven’t read it in a while, and so I’m not sure that I actually read all of the posts, but I found what I read fairly squishy, and not even on the level of, say, Nietzsche’s moral thought.
Downvoting people for not understanding that beliefs constrain expectation I’m okay with. Downvoting people for not agreeing with EY’s moral intuitions seems… mistaken.
Beliefs are only sometimes about anticipation. LessWrong repeatedly makes huge errors when they interpret “belief” in such a naive fashion;—giving LessWrong a semi-Bayesian justification for this collective failure of hermeneutics is unwise. Maybe beliefs “should” be about anticipation, but LessWrong, like everybody else, can’t reliably separate descriptive and normative claims, which is exactly why this “beliefs constrain anticipation” thing is misleading. …There’s a neat level-crossing thingy in there.
EY thinking of meta-ethics as a “solved problem” is one of the most obvious signs that he’s very spotty when it comes to philosophy and can’t really be trusted to do AI theory.
(Apologies if I come across as curmudgeonly.)
He does? I know he doesn’t take it as seriously as other knowledge required for AI but I didn’t think he actually thought it was a ‘solved problem’.
From my favorite post and comments section on Less Wrong thus far:
Yes, it looks like Eliezer is mistaken there (or speaking hyperbole).
I agree with:
… but would weaken the claim drastically to “Take metaethics, a clearly reducible problem with many technical details to be ironed out”. I suspect you would disagree with even that, given that you advocate meta-ethical sentiments that I would negatively label “Deeply Mysterious”. This places me approximately equidistant from your respective positions.
I only weakly advocate certain (not formally justified) ideas about meta-ethics, and remain deeply confused about certain meta-ethical questions that I wouldn’t characterize as mere technical details. One simple example: Eliezer equates reflective consistency (a la CEV) with alignment with the big blob of computation he calls “right”; I still don’t know what argument, technical or non-technical, could justify such an intuition, and I don’t know how Eliezer would make tradeoffs if the two did in fact have different referents. This strikes me as a significant problem in itself, and there are many more problems like it.
(Mildly inebriated, apologies for errors.)
Are you sure Eliezer does equate reflective consistency with alignment with what-he-calls-”right”? Because my recollection is that he doesn’t claim either (1) that a reflectively consistent alien mind need have values at all like what he calls right, or (2) that any individual human being, if made reflectively consistent, would necessarily end up with values much like what he calls right.
(Unless I’m awfully confused, denial of (1) is an important element in his thinking.)
I think he is defining “right” to mean something along the lines of “in line with the CEV of present-day humanity”. Maybe that’s a sensible way to use the word, maybe not (for what it’s worth, I incline towards “not”) but it isn’t the same thing as identifying “right” with “reflectively consistent”, and it doesn’t lead to a risk of confusion if the two turn out to have different referents (because they can’t).
He most certainly does not.
Relevant quote from Morality as Fixed Computation:
Thanks—I hope you’re providing that as evidence for my point.
Sort of. It certainly means he doesn’t define morality as extrapolated volition. (But maybe “equate” meant something looser than that?)
Aghhhh this is so confusing. Now I’m left thinking both you and Wei Dai have furnished quotes supporting my position, User:thomblake has interpreted your quote as supporting his position, and neither User:thomblake nor User:gjm have replied to Wei Dai’s quote so I don’t know if they’d interpret it as evidence of their position too! I guess I’ll just assume I’m wrong in the meantime.
Now two people have said the exact opposite things both of which disagree with me. :( Now I don’t know how to update. I plan on re-reading the relevant stuff anyway.
If you mean me and thomblake, I don’t see how we’re saying exact opposite things, or even slightly opposite things. We do both disagree with you, though.
I guess I can interpret User:thomblake two ways, but apparently my preferred way isn’t correct. Let me rephrase what you said from memory. It was like, “right is defined as the output of something like CEV, but that doesn’t mean that individuals won’t upon reflection differ substantially”. User:thomblake seemed to be saying “Eliezer doesn’t try to equate those two or define one as the other”, not “Eliezer defines right as CEV, he doesn’t equate it with CEV”. But you think User:thomblake intended the latter? Also, have I fairly characterized your position?
I don’t know whether thomblake intended the latter, but he certainly didn’t say the former. I think you said “Eliezer said A and B”, thomblake said “No he didn’t”, and you are now saying he meant “Eliezer said neither A nor B”. I suggest that he said, or at least implied, something rather like A, and would fiercely repudiate B.
I definitely meant the latter, and I might be persuaded of the former.
Though “define” still seems like the wrong word. More like, ” ‘right’ is defined as *point at big blob of poetry*, and I expect it will be correctly found via the process of CEV.”—but that’s still off-the-cuff.
Thanks much; I’ll keep your opinion in mind while re-reading the meta-ethics sequence/CEV/CFAI. I might be being unduly uncharitable to Eliezer as a reaction to noticing that I was unduly (objectively-unjustifiably) trusting him. (This would have been a year or two ago.) (I notice that many people seem to unjustifiably disparage Eliezer’s ideas, but then again I notice that many people seem to unjustifiably anti-disparage (praise, re-confirm, spread) Eliezer’s ideas;—so I might be biased.)
(Really freaking drunk, apologies for errors, e.g. poltiically unmotivated adulation/anti-adulation, or excessive self-divulgation. (E.g., I suspect “divulgation” isn’t a word.))
Not to worry, it means “The act of divulging” or else “public awareness of science” (oddly).
I mean, it’s not so odd. di-vulgar-tion; the result of making public (something).
Well,
divulge
divulgate
divulgation
But yeah, I just find it odd that it’s a couple of steps removed from the obvious usage. I ask myself, “Why science specifically?” and “Why public awareness rather than making the public aware?”
If I understand you correctly then this particular example I don’t think I have a problem with, at least not when I assume the kind of disclaimers and limitations of scope that I would include if I were to attempt to formally specify such a thing.
I suspect I agree with some of your objections to various degrees.
Part of my concern about Eliezer trying to build FAI also stems from his treatment of metaethics. Here’s a caricature of how his solution looks to me:
Alice: Hey, what is the value of X?
Bob: Hmm, I don’t know. Actually I’m not even sure what it means to answer that question. What’s the definition of X?
Alice: I don’t know how to define it either.
Bob: Ok… I don’t know how to answer your question, but what if we simulate a bunch of really smart people and ask them what the value of X is?
Alice: Great idea! But what about the definition of X? I feel like we ought to be able to at least answer that now...
Bob: Oh that’s easy. Let’s just define it as the output of that computation I just mentioned.
I thought the upshot of Eliezer’s metaethics sequence was just that “right” is a fixed abstract computation, not that it’s (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops it is.).
(Indeed just saying that it’s a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it’s some particular computation would be more like just plain ethics. The upshot does feel kind of underwhelming and obvious. This might be because I just don’t remember how confusing the issue looked before I read those posts. It could also mean that Eliezer claiming that metaethics is a solved problem is not as questionable as it might seem. And it could also mean that metaethics being solved doesn’t constitute as massive progress as it might seem.)
BTW, I’ve had numerous “wow” moments with philosophical insights, some of which made me spend years considering their implications. For example:
Bayesian interpretation of probability
AI / intelligence explosion
Tegmark’s mathematical universe
anthropic principle / anthropic reasoning
free will as the ability to decide logical facts
I expect that a correct solution to metaethics would produce a similar “wow” reaction. That is, it would be obvious in retrospect, but in an overwhelming instead of underwhelming way.
Is the insight about free will and logical facts part of the sequences? or is it something you or others discuss in a post somewhere? I’d like to learn about it, but my searches failed.
I never wrote a post on it specifically, but it’s sort of implicit in my UDT post (see also this comment). Eliezer also has a free will sequence which is somewhat similar/related but I’m not sure if he would agree with my formulation.
What is “you”? And what is “deciding”? Personally I haven’t been able to come to any redefinition of free will that makes more sense than this one.
I haven’t read the free will sequence. And I haven’t read up on decision theory because I wasn’t sure if my math education is good enough yet. But I doubt that if I was going to read it I would learn that you can salvage the notion of “deciding” from causality and logical facts. The best you can do is look at an agent and treat it is as a transformation. But then you’d still be left with the problem of identity.
(Agreed; I also think meta-ethics and ethics are tied into each other in a way that would require that a solution to meta-ethics would at least theoretically solve any ethical problems. Given that I can think of hundreds or thousands of object level ethical problems, and given that I don’t think my inability to answer at least some of them is purely due to boundedness, fallibility, self-delusion, or ignorance as such, I don’t think I have a solution to meta-ethics. (But I would characterize my belief in God as at least a belief that meta-ethics and ethical problems do at least have some unique (meta-level) solution. This might be optimistic bias, though.))
Wei Dai, have you read the Sermon on the Mount, particularly with superintelligences, Tegmark, (epistemic or moral) credit assignment, and decision theory in mind? If not I suggest it, if only for spiritual benefits. (I suggest the Douay-Rheims translation, but that might be due to a bias towards Catholics as opposed to Protestants.)
(Pretty damn drunk for the third day in a row, apologies for errors.)
Are you planning on starting a rationalist’s drinking club? A byob lesswrong meetup with one sober note-taker? You usually do things purposefully, even if they’re unusual purposes, so consistent drunkenness seems uncharacteristic unless it’s part of a plan.
Will_Newsome isn’t a rationalist. (He has described himself as a ‘post-rationalist’, which seems as good a term as any.)
(FWIW the “post-rationalist” label isn’t my invention, I think it mostly belongs to the somewhat separate Will Ryan / Nick Tarleton / Michael Vassar / Divia / &c. crowd; I agree with Nick and Vassar way more than I agree with the LessWrong gestalt, but I’m still off on my own plot of land. Jennifer Rodriguez-Mueller could be described similarly.)
I’m pretty sure the term “rationalist’s drinking club” wouldn’t be used ingenuously as a self-description. I have noticed the justifiable use of “post-rationalist” and distance from the LW gestalt, though. I think if there were a site centered around a sequence written by Steve Rayhawk with the kind of insights into other people’s minds he regularly writes out here, with Sark and a few others as heavy contributors, that would be a “more agenty less wrong” Will would endorse. I’d actually like to see that, too.
In vino veritas et sanitas!
It’s mentioned here:
ETA: Just in case you’re right and Eliezer somehow meant for that paragraph not to be part of his metaethics, and that his actual metaethics is just “morality is a fixed abstract computation”, then I’d ask, “If morality is a fixed abstract computation, then it seems that rationality must also be a fixed abstract computation. But don’t you think a complete “solved” metaethics should explain how morality differs from rationality?”
Rationality computation outputs statements about the world, morality evaluates them. Rationality is universal and objective, so it is unique as an abstract computation, not just fixed. Morality is arbitrary.
How so? Every argument I’ve heard for why morality is arbitrary applies just as well to rationality.
If we assume some kind of mathematical realism (which seems to be necessary for “abstract computation” and “uniqueness” to have any meaning) then there exist objectively true statements and computations that generate them. At some point there are Goedelian problems, but at least all of the computations agree on the primitive-recursive truths, which are therefore universal, objective, unique, and true.
Any rational agent (optimization process) in any world with some regularities would exploit these regularities, which means use math. A reflective self-optimizing rational agent would arrive at the same math as us, because the math is unique.
Of course, all these points are made by a fallible human brain and so may be wrong.
But there is nothing even like that for morality. In fact, when a moral statement seems universal under sufficient reflection, it stops being a moral statement and becomes simply rational, like cooperating in the Prisoner’s Dilemma when playing against the right opponents.
What is the distinction you are making between rationality and morality, then? What makes you think the former won’t be swallowed up by the latter (or vice versa!) in the limit of infinite reflection?
(Sorta drunk, apologies for conflating conflation of rationality and morality with lack of conflation of rationality and morality, probabilistically-shouldly.)
ETA: I don’t understand how my comments can be so awesome when I’m obviously so freakin’ drunk. ;P . Maybe I should get drunk all the freakin’ time. Or study Latin all the freakin’ time, or read the Bible all the freakin’ time, or ponder how often people are obviously wrong when they use the phrase “all the freakin’ time” (let alone “freakin[‘]”) (especially when they use the phrase “all the freakin’ time” all the freakin’ time, naturally-because-reflexively)....
That was the distinction—one is universal, the other arbitrary, in the limit of infinite reflection. I suppose “there is nothing arbitrary” is a valid (consistent) position, but I don’t see any evidence for it.
Interesting! You seem to be a moral realist (cognitivist, whatever) and an a-theist. (I suspect this is the typical LessWrong position, even if the typical LessWronger isn’t as coherent as you.) I’ll take note that I should pester you and/or take care to pay attention to your opinions (comments) more in the future. Also, I thank you for showing me what the reasoning process would be that would lead one to that position. (And I think that position has a very good chance of being correct—in the absence of justifiably-ignorable inside-view (non-communicable) evidence I myself hold.)
(It’s probably obvious that I’m pretty damn drunk. (Interesting that alcohol can be just as effective as LSD or cannabis. (Still not as effective as nitrous oxide or DMT.)))
Cognitivist yes, moral realist, no. IIUC, it’s EY’s position (“morality is a computation”), so naturally it’s the typical LessWrong position.
Universally valid statements must have universally-available evidence, no?
Really nothing like LSD, which makes it impossible to write anything at all, at least for me.
Assuming it started with the same laws of inference and axioms. Also I was mostly thinking of statements about the world, e.g., physics.
Or equivalent ones. But no matter where it started, it won’t arrive at different primitive-recursive truths, at least according to my brain’s current understanding.
Is there significant difference? Wherever there are regularities in physics, there’s math (=study of regularities). Where no regularities exist, there’s no rationality.
What about the poor beings with an anti-inductive prior? More generally, read this post by Eliezer.
I think the poor things are already dead. More generally, I am aware of that post, but is it relevant? The possible mind design space is of course huge and contains lots of irrational minds, but here I am arguing about universality of rationality.
My point, as I stated above, is that every argument I’ve heard against universality of morality applies just as well to rationality.
I agree with your statement:
I would also agree with the following:
The possible mind design space is of course huge and contains lots of immoral minds, but here I am arguing about universality of morality.
But rationality is defined by external criteria—it’s about how to win (=achieve intended goals). Morality doesn’t have any such criteria. Thus, “rational minds” is a natural category. “Moral minds” is not.
Yeah: CEV appears to just move the hard bit. Adding another layer of indirection.
To take Eliezer’s statement one meta-level down:
What did he mean by “I tried that...”?
I’m not at all sure, but I think he means CFAI.
Possibly he means this.
He may have solved it; if only he or someone else could say what the solution was.
Can you give examples of beliefs that aren’t about anticipation?
Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don’t relate to things that leave historical footprints. If you’ll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone and also did not get eaten by a giant space monster ten minutes after. My anticipations are not constrained by beliefs about either of those possibilities.
In both cases my inability to constrain my anticipated experiences speaks to my limited ability to experience and not a limitation of the universe. The same principles of ‘belief’ apply even though it has incidentally fallen out of the scope which I am able to influence or verify even in principle.
Beliefs that aren’t easily testable also tend to be the kind of beliefs that have a lot of political associations, and thus tend not to act like beliefs as such so much as policies. Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. “communism is good” with “correctly implemented communism is good”, or “whites and blacks have equal average IQ” with “whites and blacks would have equal average IQ if they’d had the same cultural privileges/disadvantages”. (Apologies for the necessary political examples. Please don’t use this as an opportunity to talk about communism or race.)
Many “beliefs” that aren’t politically relevant—which excludes most scientific “knowledge” and much knowledge of your self, the people you know, what you want to do with your life, et cetera—are better characterized as knowledge, and not beliefs as such. The answers to questions like “do I have one hand, two hands, or three hands?” or “how do I get back to my house from my workplace?” aren’t generally beliefs so much as knowledge, and in my opinion “knowledge” is not only epistemologically but cognitively-neurologically a more accurate description, though I don’t really know enough about memory encoding to really back up that claim (though the difference is introspectively apparent). Either way, I still think that given our knowledge of the non-fundamental-ness of Bayes, we shouldn’t try too hard to stretch Bayes-ness to fit decision problems or cognitive algorithms that Bayes wasn’t meant to describe or solve, even if it’s technically possible to do so.
I believe the common to term for that mistake is “no true Scotsman”.
What do we lose by saying that doesn’t count as a belief? Some consistency when we describe how our minds manipulate anticipations (because we don’t separate out ones we can measure and ones we can’t, but reality does separate those, and our terminology fits reality)? Something else?
So if someone you cared about is leaving your future light cone, you wouldn’t care if he gets horribly tortured as soon as he’s outside of it?
I’m not clear on the relevance of caring to beliefs. I would prefer that those I care about not be tortured, but once they’re out of my future light cone whatever happens to them is a sunk cost- I don’t see what I (or they) get from my preferring or believing things about them.
Yes, but you can affect what happens to them before they leave.
Before they leave, their torture would be in my future light cone, right?
Oops, I just realized that in my hypothetical scenario by someone being tortured outside your light cone, I meant someone being tortured somewhere your two future light cones don’t intersect.
Indeed; being outside of my future light cone just means whatever I do has no impact on them. But now not only can I not impact them, but they’re also dead to me (as they, or any information they emit, won’t exist in my future). I still don’t see what impact caring about them has.
Ok, my scenario involves your actions having an effect on them before your two light cones become disjoint.
Right, but for my actions to have an effect on them, they have to be in my future light cone at the time of action. It sounds like you’re interested in events in my future light cone but will not be in any of the past light cones centered at my future intervals- like, for example, things that I can set in motion now which will not come to fruition until after I’m dead, or the person I care about pondering whether or not to jump into a black hole. Those things are worth caring about so long as they’re in my future light cone, and it’s meaningful to have beliefs about them to the degree that they could be in my past light cone in the future.
The best illustration I’ve seen thus far is this one.
(Side note: I desire few things more than a community where people automatically and regularly engage in analyses like the one linked to. Such a community would actually be significantly less wrong than any community thus far seen on Earth. When LessWrong tries to engage in causal analyses of why others believe what they believe it’s usually really bad: proffered explanations are variations on “memetic selection pressures”, “confirmation bias”, or other fully general “explanations”/rationalizations. I think this in itself is a damning critique of LessWrong, and I think some of the attitude that promotes such ignorance of the causes of others’ beliefs is apparent in posts like “Our Phyg Is Not Exclusive Enough”.)
I agree that that post is the sort of thing that I want more of on LW.
It seems to me like Steve_Rayhawk’s comment is all about anticipation- I hold position X because I anticipate it will have Y impact on the future. But I think I see the disconnect you’re talking about- the position one takes on global warming is based on anticipations one has about politics, not the climate, but it’s necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate.
I don’t think public stated beliefs have to be about anticipation- but I do think that private beliefs have to be (should be?) about anticipation. I also think I’m much more sympathetic to the view that rationalizations can use the “beliefs are anticipation” argument as a weapon without finding the true anticipations in question (like Steve_Rayhawk did), but I don’t think that implies that “beliefs are anticipation” is naive or incorrect. Separating out positions, identities, and beliefs seems more helpful than overloading the world beliefs.
You seem to be modeling the AGW disputant’s decision policy as if he is internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes, as opposed to having “actual belief about AGW” as a latent node that isn’t introspectively accessible. That’s surely the case sometimes, but I don’t think that’s usually the case. Given the non-distinguishability of beliefs and preferences (and the theoretical non-unique-decomposability (is there a standard economic term for that?) of decision policies) I’m not sure it’s wise to use “belief” to refer to only the (in many cases unidentifiable) “actual anticipation” part of decision policies, either for others or ourselves, especially when we don’t have enough time to be abnormally reflective about the causes and purposes of others’/our “beliefs”.
(Areas where such caution isn’t as necessary are e.g. decision science modeling of simple rational agents, or large-scale economic models. But if you want to model actual people’s policies in complex situations then the naive Bayesian approach (e.g. with influence diagrams) doesn’t work or is way too cumbersome. Does your experience differ from mine? You have a lot more modeling experience than I do. Also I get the impression that Steve disagrees with me at least a little bit, and his opinion is worth a lot more than mine.)
Another more theoretical reason I encourage caution about the “belief as anticipation” idea is that I don’t think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination, where your choice of belief (e.g. expecting a squared rather than a cubed modulus Born rule) is determined by the innate preference (drilled into you by ecological contingencies and natural selection) to coordinate your actions with the actions and decision policies of the agents around you, and where your utility function is about self-coordination (e.g. for purposes of dynamic consistency). The ‘pure’ “anticipation” aspect of beliefs only seems relevant in certain cases, e.g. when you don’t have “anthropic” uncertainty (e.g. uncertainty about the extent to which your contexts are ambiently determined by your decision policy). Unfortunately people like me always have a substantial amount of “anthropic” uncertainty, and it’s mostly only in counterfactual/toy problems where I can use the naive Bayesian approach to epistemology.
(Note that taking the general decision theoretic perspective doesn’t lead to wacky quantum-suicide-like implications, otherwise I would be a lot more skeptical about the prudence of partially ditching the Bayesian boat.)
I’m describing it that way but I don’t think the introspection is necessary- it’s just easier to talk about as if he had full access to his mind. (Private beliefs don’t have to be beliefs that the mind’s narrator has access to, and oftentimes are kept out of its reach for security purposes!)
I don’t think I’ve seen any Bayesian modeling of that sort of thing, but I haven’t gone looking for it.
Bayes nets in general are difficult for people, rather than computers, to manipulate, and so it’s hard to decide what makes them too cumbersome. (Bayes nets in industrial use, like for fault diagnostics, tend to have hundreds if not thousands of nodes, but you wouldn’t have a person traverse them unaided.)
If you wanted to code a narrow AI that determined someone’s mood by, say, webcam footage of them, I think putting your perception data into a Bayes net would be a common approach.
Political positions / psychology seem tough. I could see someone do belief-mapping and correlation in a useful way, but I don’t see analysis on the level of Steve_Rayhawk’s post coming out of a computer-run Bayes net anytime soon, and I don’t think drawing out a Bayes net would help significantly with that sort of analysis. Possible but unlikely- we’ve got pretty sophisticated dedicated hardware for very similar things.
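To make the webcam-mood example above concrete, here is a toy sketch of the simplest possible “perception data into a Bayes net” model: a naive Bayes net with one latent mood node. The feature names and all the probabilities are invented purely for illustration.

# Toy sketch of the "webcam mood" idea above (hypothetical features and
# made-up numbers, not a real mood detector): a naive Bayes net with one
# hidden "mood" node and two observed features extracted from webcam frames.

PRIOR = {"happy": 0.6, "sad": 0.4}              # P(mood), made up
P_SMILING = {"happy": 0.8, "sad": 0.2}          # P(smiling | mood), made up
P_FURROWED_BROW = {"happy": 0.1, "sad": 0.6}    # P(furrowed brow | mood), made up

def posterior_mood(smiling, furrowed_brow):
    """Return P(mood | observed features) by enumerating the tiny joint distribution."""
    unnormalized = {}
    for mood, prior in PRIOR.items():
        p = prior
        p *= P_SMILING[mood] if smiling else 1 - P_SMILING[mood]
        p *= P_FURROWED_BROW[mood] if furrowed_brow else 1 - P_FURROWED_BROW[mood]
        unnormalized[mood] = p
    total = sum(unnormalized.values())
    return {mood: p / total for mood, p in unnormalized.items()}

print(posterior_mood(smiling=True, furrowed_brow=False))
# with these made-up numbers: happy ~0.93, sad ~0.07

A net like this is easy when the observed nodes are things a camera can measure; the contrast being drawn above is that it gets much harder when the latent node is something like an “actual belief about AGW”.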
Hmm. I’m going to need to sleep on this, but this sort of coordination still smells to me like anticipation.
(A general comment: this conversation has moved me towards thinking that it’s useful for the LW norm to be tabooing “belief” and using “anticipation” instead when appropriate, rather than trying to equate the two terms. I don’t know if you’re advocating for tabooing “belief”, though.)
(Complement to my other reply: You might not have seen this comment, where I suggest “knowledge” as a better descriptor than “belief” in most mundane settings. (Also I suspect that people’s uses of the words “think” versus “believe” are correlated with introspectively distinct kinds of uncertainty.))
Beliefs about primordial cows, etc. Most people’s beliefs. He’s talking descriptively, not normatively.
Don’t my beliefs about primordial cows constrain my anticipation of the fossil record and development of contemporary species?
I think “most people’s beliefs” fit the anticipation framework- so long as you express them in a compartmentalized fashion, and my understanding of the point of the ‘belief=anticipation’ approach is that it helps resist compartmentalization, which is generally positive.
The metaethics sequence is a bit of a mess, but the point it made is important, and it doesn’t seem like it’s just some weird opinion of Eliezer’s.
After I read it I was like, “Oh, ok. Morality is easy. Just do the right thing. Where ‘right’ is some incredibly complex set of preferences that are only represented implicitly in physical human brains. And it’s OK that it’s not supernatural or ‘objective’, and we don’t have to ‘justify’ it to an ideal philosophy student of perfect emptiness”. The fake utility functions and recursive justification stuff helped.
Maybe there’s something wrong with Eliezer’s metaethics, but I haven’t seen anyone point it out, and I have no reason to suspect it. Most of the material that contradicts it consists of obvious mistakes from not having read and understood the sequences, not enlightened counter-analysis.
Hm. I think I’ll put on my project list “reread the metaethics sequence and create an intelligent reply.” If that happens, it’ll be at least two months out.
I look forward to that.
Has it ever been demonstrated that there is a consensus on what point he was trying to make, and that he in fact demonstrated it?
He seems to reach a conclusion, but I don’t believe he demonstrated it, and I never got the sense that he carried the day in the peanut gallery.
Try actually applying it to some real life situations and you’ll quickly discover the problems with it.
There’s a difference between a metaethics and an ethical theory.
The metaethics sequence is supposed to help dissolve the false dichotomy “either there’s a metaphysical, human-independent Source Of Morality, or else the nihilists/moral relativists are right”. It’s not immediately supposed to solve “So, should we push a fat man off the bridge to stop a runaway trolley before it runs over five people?”
For the second question, we’d want to add an Ethics Sequence (in my opinion, Yvain’s Consequentialism FAQ lays some good groundwork for one).
such as?
Well, for starters determining whether something is a preference or a bias is rather arbitrary in practice.
I struggled with that myself, but then figured out a rather nice quantitative solution.
Eliezer’s stuff doesn’t say much about that topic, but that doesn’t mean it fails at it.
I don’t think your solution actually resolves things since you still need to figure out what weights to assign to each of your biases/values.
You mean that it’s not something that I could use to write an explicit utility function? Of course.
Beyond that, whatever weight all my various concerns have is handled by built-in algorithms. I just have to do the right thing.
The main problem I have is that it is grossly incomplete. There are a few foundational posts but it cuts off without covering what I would like to be covered.
What would you like covered? Or is it just that vague “this isn’t enough” feeling?
I can’t fully remember—it’s been a while since I considered the topic so I mostly have the cached conclusion. More on preference aggregation is one thing. A ‘preferences are subjectively objective’ post. A post that explains more completely what he means by ‘should’ (he has discussed and argued about this in comments).
It’s much worse than that. Nobody on LW seems to be able to understand it at all.
Nah. Subjectivism. Euthyphro.
Random factoid: The post by Eliezer that I find most useful for describing (a particular aspect of) moral philosophy is actually a post about probability.
That is an excellent point.
(In general I use most of the same intuitions for values as I do for probability; they share a lot of the same structure, and given the oft-remarked-on non-unique-decomposability of decision policies they seem to be special cases of some more fundamental thing that we don’t yet have a satisfactory language for talking about. You might like this post and similar posts by Wei Dai that highlight the similarities between beliefs and values. (BTW, that post alone gets you half the way to my variant of theism.) Also check out this post by Nesov. (One question that intrigues me: is there a nonlinearity that results in non-boring outputs if you have an agent who calculates the expected utility of an action by dividing the universal prior probability of A by the universal prior probability of A (i.e., unity)? (The reason you might expect nonlinearities is that some actions depend on the output of the agent program itself, which is encoded by the universal prior but is undetermined until the agent fills in the blank. Seems to be a decent illustration of the more general timeful/timeless problem.)))
I think you mean that it would get you halfway there. Do you have good reason to think it would do the same for others who aren’t already convinced? (It seems like there could be non-question-begging reasons to think that—e.g., it might turn out that people who’ve read and understood it quite commonly end up agreeing with you about God.)
I think most of the disagreement would be about the use of the “God” label, not about the actual decision theory. Wei Dai asks:
This is very close to my variant of theism / objective morality, and gets you to the First and Final Cause of morality—the rest is discerning the attributes of said Cause, which we can do to some extent with algorithmic information theory, specifically the properties of Chaitin’s number of wisdom, omega. I think I could argue quite forcefully that my God is the same God as the God of Aquinas and especially Leibniz (who was in his time already groping towards algorithmic information theory himself). Thus far the counterarguments I’ve seen amount to: “Their ‘language’ doesn’t mean anything; if it does mean something then it doesn’t mean what you think it means; if it does mean what you think it means then you’re both wrong, traitor.” I strongly suspect rationalization due to irrational allergies to the “God” word; most people who think that theism is stupid and worthless have very little understanding of what theology actually is. This is pretty much unrelated to the actual contents of my ideas about ethics and decision theory, it’s just a debate about labels.
Anyway what I meant wasn’t that reading the post halfway convinces the attentive reader of my variant of theism, I meant it allows the attentive reader to halfway understand why I have the intuitions I do, whether or not the reader agrees with those intuitions.
(Apologies if I sound curmudgeonly, really stressed lately.)
Will, may I suggest that you try to work out the details of your objective morality first and explain it to us before linking it with theism/God? For example, how are we supposed to use Chaitin’s Omega to “discern the attributes of said Cause”? I really have no idea at all what you mean by that, but it seems like it would make for a more interesting discussion than whether your God is the same God as the God of Aquinas and Leibniz, and also less likely to trigger people’s “allergies”.
Actually for the last few days I’ve been thinking about emailing you, because I’ve been planning on writing a long exegesis explaining my ideas about decision theory and theology, but you’ve stated that you don’t think it’s generally useful to proselytize about your intuitions before you have solid, formally justified results that resulted from your intuitions. Although I’ve independently noticed various ideas about decision theory (probably due to Steve’s influence), I haven’t at all contributed any new insights, and the only thing I would accomplish with my apologetics is to convince other people that I’m not obviously crazy. You, Nesov, and Steve have made comments that indicate that you recognize that various of my intuitions might be correct, but of course that in itself isn’t anything noteworthy: it doesn’t help us build FAI. (Speaking of which, do you have any ideas about a better name than “FAI”? ‘Friendliness’ implies “friendly to humans”, which itself is a value judgment. Justified Artificial Intelligence, maybe? Not Regrettable Artificial Intelligence? I was using “computational axiology” for a while a few years ago, but if there’s not a fundamental distinction between epistemology and axiology then that too is sort of misleading.)
Now, I personally think that certain results about decision theory should actually affect what we think of as morally justified, and thus I think my intuitions are actually important for not being damned (whatever that means). But I could easily be wrong about that.
The reason I’ve made references to theology is actually a matter of epistemology, not decision theory: I think LessWrong and others’ contempt for theology and other kinds of academic philosophy is unjustified and epistemically poisonous. (Needless to say, I am extremely skeptical of arguments along the lines of “we only have so much time, we can’t check out every crackpot thesis that comes our way”: in my experience such arguments are always, without exception the result of motivated cognition.) I would hold this position about normative epistemology even if my intuitions about decision theory didn’t happen to support various theological hypotheses.
Anyway, my default position is to write up the aforementioned exegesis in Latin; that way only people that already give my opinions a substantial amount of weight will bother to read it, and I won’t be seen as unfairly proselytizing about my own justifiably-ignorable ideas.
(I’m pretty drunk right now, apologies for errors. I might respond to your comment again when I’m sober.)
OK, so now you’re just taking the piss.
Writing it in Latin selects to some extent for people who respect your opinions, but more strongly for people who happen to know quite a lot of Latin. It sounds as if what you actually want is to be able to say you’ve written up your position without anyone actually reading it. I hope that isn’t really what you want.
(I’m pretty stupid; apologies for any mistakes I make.)
(Part of this stems from my looking for an excuse to manipulate myself into learning Latin. Thus far I’ve used a hot Catholic chick and a perceived moral obligation to express myself incoherently—a quite potent combination.)
That actually sounds a lot like me. Could be true. Yay double negative moral obligations—they force us to be coherent on a higher level, and about more important things!
I will generally explain my intuitions but try not to waste too much time arguing for them if other people do not agree. So I think if you have any ideas that you have not already clearly explained, then you should do so. (And please, not in Latin.)
How about Minimally Wrong AI? :)
Making off-hand references to theology is not going to change our minds about this. Do you have an actual plan to do so? If not, you’re just wasting your credibility and making it less likely for us to take your other ideas seriously.
(Side note: This self-sabotage is purposeful, for reasons indicated by, e.g., this post.)
Okay, thanks for the advice. I haven’t yet clearly explained most of my ideas. (Hm, “my” ideas?—I doubt any of them are actually “mine”.) Not sure I want to do so (hence the Latin), but it sort of seems like a moral imperative, so I guess I have to. bleh bleh bleh
I’ve debated the meta-level issue of epistemic “charity” and how much importance we should assign it in our decision calculi a few times on LessWrong before, e.g. in a few debates with Nesov. You were involved in at least one of them. I think what eventually happened is that I became afraid I was committing typical mind fallacy in advocating a sort of devil-may-care attitude to looking at weird or low-status beliefs; Nesov claimed that doing so had been harmful to him in the past, so I decided I’d rather collect more data before pushing my epistemic intuitions. Unfortunately I don’t know of an easy way to collect more data, so I’ve sort of stalled out on that particular campaign. The making references to theism thing is a sort of middleground position I’ve taken up, presumably to escape various aversions that I don’t have immediate introspective access to. There’s also the matter of not going out of my way to not appear discreditable.
FWIW, I think this “middleground position” is the worst of both worlds.
Your comments have made me wonder if I’ve been too creditable, i.e., to the extent of making people take my ideas more seriously than they should. But it seems like a valid Umeshism that if there isn’t at least one person who has taken your ideas too seriously, then you’re not being creditable enough. I may be close to (or past) this threshold already, but you seem to still have quite a long way to go, so I suggest not worrying about this right now. Especially since credibility is much harder to gain than to lose, so if you ever find yourself having too much credibility, it shouldn’t be too late to do something about it then.
Your comment seems to me to be modally implicitly self-contradictory. For you say that you are worried that you’ve caused yourself to be too creditable, and yet the reason you are considering that hypothesis is that I, a mere peasant, have implicitly-suggested-if-only-categorically that that might be the case. If I am wrong to doubt the wisdom of my self-doubting, then by your lights I am right, and not right to do so! You’ve taken me seriously enough to doubt yourself—to some extent this implies that I have impressed my self too strongly upon you, for you and I and everyone else thinks that you are more justified than I. Again, modally—not necessarily self-contradictory, but it leans that way, at least connotationally-implicitly.
(Really quite drunk, again, apologies for errors, again.)
Damn it, why am I giving you advice on the proper level of credibility, when I should be telling you to stop drinking so much? Talk about cached selves...
It’s okay, I ran out of rum. But now I’m left with an existential question: Why is the rum gone?
Apologies in advance for the emotivist interpretation of morality espoused by this comment.
Yay!
Boo.
YAAAAAY!
Boo.
I may well be being obtuse, but it seems to me that there’s something very odd about the phrase “theism / objective morality”, with its suggestion that basically the two are the same thing.
Have you actually argued forcefully that your god is also Aquinas’s and Leibniz’s? I ask because first you say you could, which kinda suggests you haven’t actually done it so far (at least not in public), but then you start talking about “counterarguments”, which kinda suggests that you have and people have responded.
I agree with Wei_Dai that it might be interesting to know more about your version of objective morality and how one goes about discerning the attributes of its alleged cause using algorithmic information theory.
This reflects a confusion I have about how popular philosophical opinion is in favor of moral realism, yet against theism. It seems that getting the correct answer to all possible moral problems would require prodigious intelligence, and so I don’t really understand the conjunction of moral realism and atheism. This likely reflects my ignorance of the existing philosophical literature, though to be honest like most LessWrongers I’m a little skeptical of the worth of the average philosopher’s opinion, especially about subjects outside of his specialty. Also, if I averaged philosophical opinion over, say, the last 500 years, then I think theism would beat atheism. Also, there’s the algorithm from music appreciation, which is like “look at what good musicians like”, which I think would strongly favor theism. Still, I admit I’m confused.
I’ve kinda argued it on the meta-level, i.e. I’ve argued about when it is or isn’t appropriate to assume that you’re actually referring to the same concept versus just engaging in syncretism. But IIRC I haven’t yet forcefully argued that my god is Leibniz’s God. So, yeah, it’s a mixture.
I replied to Wei Dai’s comment here.
BTW, realistically, I won’t be able to reply to your comment re CEV/rightness, though as a result of your comment I do plan on re-reading the meta-ethics sequence to see if “right” is anywhere (implicitly or explicitly) defined as CEV.
(Inebriated, apologies for errors or omissions.)
(nods) Very likely. To the extent that this technique is useful for rank-ordering philosophical positions I ought to adopt, I can also use it to rank-order various theological positions to determine which particular theology to adopt. (I’ve never done this, but I predict it’s one that endorses literacy.)
Surely typical moral realists, atheist or otherwise, don’t believe that they’ve got the correct answer to all possible moral problems. (Just as no one thinks they’re factually correct about everything.)
I don’t think “averaged philosophical opinion” is likely to have much value. Nor “averaged opinion of good musicians” when you’re talking about something that isn’t primarily musical, especially when you average over a period for much of which (e.g.) many of the best employment opportunities for musicians were working for religious organizations.
(Human with a finite brain; apologies for errors or omissions.)
Apparently I mis-stated something. I’m a little too spent to fully rectify the situation, so here’s some word salad: moral realism implies belief in a Form of the Good, but ISTM that the Form of the Good has to be personal, because only intelligences can solve moral problems; specifically, I think a true Form of the Good has to be a superintelligence, i.e. a god, who, if the god is also the Form of the Good, we call God. ISTM that belief in a Form of the Good that isn’t personal is an obvious error that any decent moral philosopher should recognize, and so I think there must be something wrong with how I’m formulating the problem or with how I’m conceptualizing others’ representation of the problem.
Point taken. There is certainly a lack along those lines.
Thanks.
I was going to include something along those lines, but then I didn’t. But really, if you haven’t read the sequences, and don’t care to, the only thing that separates LW from r/atheism, rationalwiki, whatever that place is you linked to, and so on is that a lot of people here have read the sequences, which isn’t a fair reason to hang out here.
My recent post explains how to get true beliefs in situations like the anthropic trilemma, which post begins with the words “speaking of problems I don’t know how to solve.”
However, there is a bit of a remaining problem, since I don’t know how to model the wrong way of doing things (naive application of Bayes’ rule to questionable interpretations) well enough to tell whether it’s fixable or not, so although the problem is solved, it is not dissolved.
I quietly downvoted your post when you made it for its annoying style and because I didn’t think it really solved any problems, just asserted that it did.
What could I do to improve the style of my writing?
How do you measure “progress”, exactly? I’m not sure what the word means in this context.
Yes, this needs clarification. Is it “I like it better/I don’t like it better” or something a third party can at least see?
Where, specifically, do you not see progress? I see much better recognition of, say, regression to the mean here than in the general population, despite it never being covered in the sequences.
This is a very interesting question. I cannot cite a lack of something. But maybe what I’m saying here will be obvious-in-retrospect if I put it like this:
This post is terrible. But some of the comments pointing out its mistakes are great. On the other hand, it’s easier to point out the mistakes in other people’s posts than to be right yourself. Where are the new posts saying thunderously correct things, rather than mediocre posts with great comments pointing out what’s wrong with them?
That terrible post is hardly an example of a newbie problem—it’s clearly a one-off by someone who read one post and isn’t interested in anything else about the site, but was sufficiently angry to create a login and post.
That is, it’s genuine feedback from the outside world. As such, trying really hard to eliminate this sort of post strikes me as something you should be cautious about.
Also, such posts are rare.
I’m insulted (not in an emotional way! I just want to state my strong personal objection!). Many of us challenge the notion of “progress” being possible or even desirable on topics like Torture vs Specks. And while I’ve still much to learn, there are people like Konkvistador, who’s IMO quite adept at resisting the lure of naive utilitarianism and can put a “small-c conservative” (meaning not ideologically conservative, but technically so) approach to metaethics to good use.
Oh, I would agree that progress here is questionable, but I agree with Grognor in the sense that LessWrong, at least in top-level posts, isn’t as intellectually productive as it could be. Worse, it seems to be a rather closed thing, unwilling to update on information from outside.
Demanding that people read tomes of text before you’re willing to talk to them seems about the easiest way imaginable to silence any possible dissent. Anyone who disagrees with you won’t bother to read your holy books, and anyone who hasn’t will be peremptorily ignored. You’re engaging in a pretty basic logical fallacy in an attempt to preserve rationality. Engage the argument, not the arguer.
Expecting your interlocutors to have a passing familiarity with the subject under discussion is not a logical fallacy.
There are ways to have a passing familiarity with rational debate that don’t involve reading a million words of Eliezer Yudkowsky’s writings.
That has nothing to do with whether or not something you believe to be a logical fallacy actually is one.
The fallacy I was referring to is basic ad hominem—“You haven’t read X, therefore I can safely ignore your argument”. It says nothing about the validity of the argument, and everything about an arbitrary set of demands you’ve made.
I’m all for demanding a high standard of discussion, and of asking for people to argue rationally. But I don’t care about how someone goes about doing so—a good argument from someone who hasn’t read the sequences is still better than a bad argument from someone who has. You’re advocating a highly restrictive filter, and it’s one that I suspect will not be nearly as effective as you want it to be (other than at creating an echo chamber, of course).
We’re more or less talking past each other at this point. I haven’t advocated for a highly restrictive filter, nor have I made any arbitrary demands on my interlocutors—rather, very specific ones. Very well.
I can’t say for certain that that situation doesn’t occur, but I haven’t seen it in recent memory. The situation I’ve seen more frequently runs like this:
A: “Here’s an argument showing why this post is wrong.”
B: “EY covers this objection here in this [linked sequence].”
A: “Oh, I haven’t read the sequences.”
At this point, I would say B is justified in ignoring A’s argument, and doing so doesn’t constitute a logical fallacy.
Who is this supposed to surprise? It’s a forum, for god’s sake. We shouldn’t be going into this with an expectation of changing the world. Producing a genuinely good fanfic already puts it ahead of 99% of the forums out there.
Why shouldn’t we have that as our standard?
Because I know of no better way to become cynical, burnt-out, and hateful than to go into a project expecting to change the world. The world is very big, and it doesn’t give a damn about you. Unless you have extraordinary resources (I’d put the lower bound at “in control of a nontrivial country”), your probability of success is minuscule. Sure, it’s fun to think about what would happen if you get lucky and achieve great fame and success, but it’s like thinking about what you’re going to do with your lottery winnings. Dream if you like, but you’re a fool if you expect or demand it.
That isn’t what Grognor was talking about. He was talking about making some sort of progress in the information content in the comments made on the forum itself. That more or less should be expected from any group of humans with significant interest in learning.
Lesswrong didn’t produce a fanfic. Eliezer Yudkowsky produced a fanfic at a time where he very seldom participated on lesswrong. (Luminosity is perhaps a step closer.)
Because people on LW told me to do it, because I comment more often, or for some other reason?
Did they? Mostly because you at that time commented more often and also wrote a “Luminosity” sequence at a similar time. (It’s a small step but it does seem that fame aside the affiliation is slightly closer.)
Yeah, they did.
Then I misinterpreted. Apologies. Nonetheless, this is a constant feature of open groups—you can educate people, but new members come uneducated, and that keeps the average fairly static. The alternative is to wall it off, of course, so I guess I can see why that’s what’s being advocated, but that seems like a deeply foolish approach.
That part was a joke ;)
Its express purpose is to change the world. Whether a forum is a suitable vehicle for this is a separate question. (The rationality org is much more clear on that as its purpose, and this gave rise to that …)
And everyone who buys a lottery ticket does so for the express purpose of winning millions of dollars.
I think you want it more tiered/topical, not more exclusive, which I would certainly support. Unfortunately, the site design is not a priority.
Yeah. Having separate “elite rationality club” and “casual rationality discussion” areas is probably my preferred solution.
Too bad everyone who cares doesn’t care enough to hack the code. How hard would it be for someone to create and send in a patch for something like an exclusive back-room discussion? It would just be adding another subreddit, no?
/r/evenlesswrong
I’ve seen this suggested before, and while it would have positive aspects, from a PR perspective it would be an utter nightmare. I’ve been here for slightly less than a year, after being referred to HPMOR. I am very unlikely (prior p = 0.02; given that EY started it and I was obsessed with HPMOR, probably closer to p = 0.07) to have ever followed a forum/blog that had an “exclusive members” section. Insomuch as LW is interested in recruiting potential rationalists, this is a horrible, horrible idea.
A more realistic idea (what I think the grandparent was suggesting) is just to try to filter off discussions not strictly related to rationality (HPMOR; fiction threads; the AGI/SIAI discussions; etc.) into the discussion forum, and to stick stuff strictly related to rationality (or relevant paper-sharing, or whatever) in another subreddit.
The obvious alternative is to create a tier below discussion, which would attract some of the lower-quality discussion posts and thereby improve the signal in the main discussion section.
Or topical discussion boards...
good point.
Would you prefer that we be a bit more hardcore about subscribing to the sequences, and a bit more explicit that people should read them? Or maybe we should have little markers next to people’s names: “this guy has read everything”? Or maybe we should do nothing because all moves are bad? Maybe a more loosely affiliated site that has the strict standard, instead of just a “non-noobs” section?
In short, no. See my other comment for details. I think the barriers to entry are high enough, and raising them further filters out people we might want.
This introduces status problems, and it’s impossible (or at least inefficient) to enforce well.
I won’t claim that the current design we’ve located in LessWrongspace is optimal, but I’m quite happy with it, and I don’t see any way to immediately improve it in the regard you want.
Actually, I’ll take that back. I would like to see the community encourage the use of tags a lot more. I think if everyone was very dedicated in trying to use the tagging system it might help the problem you’re referring to. But in some way, I think those tags also need to be incorporated into titles of discussion posts. I really like the custom of using [META] or [LINK] in the title, and I’d like to see that expand.
Again no, really for the same reasons as above.
If that happened, unless it was very carefully presented, I would expect a drop of quality in the discussion section because “hey cool down man we aren’t in the restricted section here”, and because many old-timers might stop spending time there altogether.
I would rather see a solution that didn’t include such a neat division; where lower standards were treated as the exception (like in HPMoR threads and open threads), not the rule.
The approach of seeding another forum or few may help, cf. the Center for Modern Rationality.
If someone’s going to alter this forum software, how about they put a priority on giving me a list of replies to my posts, so I don’t have to re-scan long comment threads regularly in order to carry on a discussion? Also, making long threads actually load when I hit “Show all” would be nice too.
I’m not sure what you mean here- how would this differ from the inbox?
Definitely.
Is there a way to get the inbox to show responses to a post, as well as to comments? That’s not the default behavior.
This would be an excellent improvement to the site, and would solve the problem of people (cough lukeprog cough) not reading the comments on their posts.
Just to be clear, I’m not requesting it, I just thought pedanterrific was indicating that it was current behavior.
It doesn’t work with LW’s inbox, but you can subscribe to threads’ RSS feeds.
Oh. (Have never made a post.)
That feature exists? Where?
Um. I was referring to the little envelope symbol underneath your karma score.
I hadn’t noticed that before. Thank you.
Edit: Though it’s seriously annoying to not have the ability to reply from there, or even link directly to it. I’m actually pining for Disqus now, and that’s a first.
I know you say that you don’t want to end up with “ignore any discussion that contains contradictions of the lesswrong scriptures”, but it sounds a bit like that. (In particular, referring to stuff like “properly sequenced LWers” suggests to me that you not only think that the sequences are interesting, but that they are actually right about everything.) The sequences are not scripture, and I think (hope!) there are a lot of LWers who disagree to a greater or lesser degree with them.
For example, I think the metaethics sequence is pretty hopeless (WARNING: Opinion based on when I last read it, which was over a year ago). Fortunately, I don’t think much of the discussion here has actually hinged upon Eliezer’s metaethics, so I don’t think that’s actually too much of an issue.
I’m not even that worried about a convinced Yudkowsky disciple “righteously wielding the banhammer”; I suspect people making intelligent points wouldn’t get banned, but you seem to be suggesting that they should be ignored.
Perhaps a more constructive approach would just be to list any of the particularly salient assumptions you’re making at the start of the post? e.g. “This post assumes the Metaethics sequence; if you disagree with that, go argue about it somewhere else”
Eliezer considers the metaethics sequence to be a failed explanation, something most people who have read it agree with, so you’re not alone.
What if users were expected to have a passing familiarity with the topics the sequences covered, but not necessarily to have read them? That way, if they were going to post about one of the topics covered in the sequences, they could be sure to brush up on the state of the debate first.
If you’ve found some substantially easier way to become reasonably competent—i.e., possessing a saving throw vs. failing at thinking about thinking—in a way that doesn’t require reading a substantial fraction of the sequences, you’re remiss for not describing such a path publicly.
I would guess that hanging out with friends who are aspiring rationalists is a faster way to become rational than reading the sequences.
In any case, it seems pretty clear to me that the sequences do not have a monopoly on rationality. Eliezer isn’t the only person in the world who’s good at thinking about his thinking.
FWIW, I was thinking along the lines of only requesting passing familiarity with non-core sequences.
I read A Human’s Guide to Words and Reductionism, and a little bit of the rest. I at least feel like I have pretty good familiarity with the rest of the topics covered as a result of having a strong technical background. The path is pretty clear, though perhaps harder to take—just take college-level classes in mathematics, econ, and physics, and think a lot about the material. And talk to other smart people.
I haven’t read most of the sequences yet, and I agree with most of what is being said by those LW members you’d like to see more of.
Most of the criticisms I voice are actually rephrased and forwarded arguments and ideas from people much smarter and more impressive than me, including big names like Douglas Hofstadter. Quite a few of them have read all of the sequences too.
Here is an example from yesterday. I told an AI researcher about a comment made on LW (don’t worry about possible negative influence; they are already well aware of everything and have read the sequences). Here is part of the reply:
...
I would usually rephrase this at some point and post it as a reply.
And this is just one of many people who simply don’t bother to get into incredibly exhausting debates with a lesswrong mob.
Without me your impression that everyone agrees with you would be even worse. And by making this community even more exclusive you will get even more out of touch with reality.
It is relatively easy to believe that the only people who would criticize your beloved beliefs are some idiots like me who haven’t even read your scriptures. Guess again!
UDT can be seen as just this. It was partly inspired/influenced by AIXI anyway, if not exactly an extension of it. Edit: It doesn’t incorporate a notion of friendliness yet, but is structured so that unlike AIXI, at least in principle such a notion could be incorporated. See the last paragraph of Towards a New Decision Theory for some idea of how to do this.
That post is part of the reason I made this post. Shit like this from the OP there:
!!!
I don’t expect that if everyone made more of an effort to be more deeply familiar with the LW materials that there would be no disagreement with them. There is and would be much more interesting disagreement, and a lot less of the default mistakes.
Um, you seem to me to be saying that someone (davidad) who is in fact familiar with the sequences, and who left AI to achieve things well past most of LW’s participants, is a perfect example of who you don’t want here. Is that really what you meant to put across?
Can you provide some examples of interesting disagreement with the LW materials that was acknowledged as such by those who wrote the content or believe that it is correct?
My default stance has always been that people disagree, even when informed, otherwise I’d have a lot more organizations and communities to choose from, and there’d be no way I could make it into SI’s top donor list.
When EY was writing the sequences, what percentage of the population was he hoping to influence? I suppose a lot. Now some people are bothered because the message began to spread and in the meantime the quality of posts is not the same. Well, if the discussion becomes poor, go somewhere else. Highly technical people simply don’t get involved in something they see as hopeless or uninteresting, like trying to make people more rational or reduce x-risks.
First they came for the professional philosophers,
and I didn’t speak out because I wasn’t a professional philosopher.
Then they came for the frequentists,
and I didn’t speak out because I wasn’t a frequentist.
Then they came for the AI skeptics,
and I didn’t speak out because I wasn’t skeptical of AI.
and then there was no one left to talk to.
“These guys are cultish and they know it, as evidenced by the fact that they’re censoring the word ‘cult’ on their site”