Less Wrong: Open Thread, December 2010
Even with the discussion section, there are ideas or questions too short or inchoate to be worth a post.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Is it just me or is Overcoming Bias almost reaching the point of self-parody with recent posts like http://www.overcomingbias.com/2010/12/renew-forager-law.html ?
It’s interesting as a “just for fun” idea. On some blogs it would probably be fine, but OB used to feel a lot more rigorous, important, and multiple levels above me, than it does now.
Did Hanson’s posts specifically feel that way? OB used to be a group blog.
I think Robin is playing at something like fourth- or fifth-order contrarian at this point.
There’s a word for doing that on the Internet. I wonder what that word is.
For some posts, only occasionally, I wonder if Robin Hanson is intentionally messing with the LessWrong crowd that’s still over there, posting the most plausible stuff he can think of that’s… well… wrong.
But this is simply wild mass guessing on my part.
I’m nowhere near as conversant with OB’s back catalog as I am with LW’s, which might skew my interpretation, but on reading Gentle Silent Rape I did suspect I was being, well, trolled.
It might be trolling, but Hanson has a history of trying to figure out scenarios where whether women actually want sex is irrelevant.
If it were sincere, it would be a sterling example of the dangers of proceeding with no feedback.
Reaching?
Naah.
Yes, that post neglects to mention an obvious fact that makes it come off as hysterical and creepy/potentially dangerous. However, the lesser point that sex-‘starved’ people (especially men) are unfortunately Acceptable Targets, even though sexual deprivation can be a significant emotional harm, seems true and important.
(It seems to me that people vary a lot in how much they suffer when sexually deprived, and the typical mind fallacy is rampant in both directions, though probably more problematic coming from the low sufferers. As a low sufferer myself, this is not a personal complaint.)
Coincidence or name-drop?
(A fine suggestion, in any case, though “List of cognitive biases” would also be a good one to have on the list.)
(I have no idea whether the following is of any interest to anyone on LW. I wrote it mostly to clarify my own confusions, then polished it a bit out of habit. If at least a few folks think it’s potentially interesting, I’ll finish cleaning it up and post it for real.)
I’ve been thinking for a while about the distinction between instrumental and terminal values, because the places it comes up in the Sequences (1) are places where I’ve bogged down in reading them. And I am concluding that it may be a misleading distinction.
EY presents a toy example here, and I certainly agree that failing to distinguish between (V1) “wanting chocolate” and (V2) “wanting to drive to the store” is a fallacy, and a common one, and an important one to dissolve. And the approach he takes to dissolving it is sound, as far as it goes: consider the utility attached to each outcome, consider the probability of each outcome given possible actions, then choose the actions that maximize expected utility.
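The dissolution EY describes can be sketched mechanically. Here is a minimal illustration in code; the utilities and probabilities are invented for the example, not taken from anywhere:

```python
# Toy expected-utility calculation for the chocolate example.
# All numbers are made up for illustration.

utility = {"chocolate": 10, "no_chocolate": 0}

# P(outcome | action): driving to the store probably yields chocolate;
# staying home almost certainly doesn't.
p_outcome = {
    "drive_to_store": {"chocolate": 0.9, "no_chocolate": 0.1},
    "stay_home":      {"chocolate": 0.05, "no_chocolate": 0.95},
}

def expected_utility(action):
    return sum(p * utility[o] for o, p in p_outcome[action].items())

best = max(p_outcome, key=expected_utility)
# Note that "wanting to drive to the store" (V2) never appears as an
# input; only the outcome utility (V1) is a primitive. V2 falls out of
# the calculation as the action with the highest expected utility.
```

The point of the sketch is just that V2 has no utility term of its own: sever the causal link (change the probabilities) and the "value" of driving evaporates.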
But in that example, V1 and V2 aren’t just different values, they are hierarchically arranged values… V2 depends on V1, such that if their causal link is severed (e.g., driving to the store stops being a way to get chocolate) then it stops being sensible to consider V2 a goal at all. In other words, the utility of V2 is zero within this toy example, and we just take the action with the highest probability of V1 (which may incidentally involve satisfying V2, but that’s just a path, not a goal).
Of course, we know wanting chocolate isn’t a real terminal value outside of that toy example; it depends on other things. But by showing V1 as the stable root of a toy network, we suggest that in principle there are real terminal values, and a concerted philosophical effort by smart enough minds will identify them. Which dovetails with the recurring(1) idea that FAI depends on this effort because uncovering humanity’s terminal values is a necessary step along the way to implementing them, as per Fun Theory.
But just because values exist in a mutually referential network doesn’t mean they exist in a hierarchy with certain values at the root. Maybe I have (V3) wanting to marry my boyfriend and (V4) wanting to make my boyfriend happy. Here, too, these are different values, and failing to distinguish between them is a problem, and there’s a causal link that matters. But it’s not strictly hierarchical: if the causal link is severed (e.g., marrying my boyfriend isn’t a way to make him happy) I still have both goals. Worse, if the causal link is reversed (e.g., marrying my boyfriend makes him less happy, because he has V5: don’t get married), I still have both goals. Now what?
Well, one answer is to treat V3 and V4 (and V5, if present) as instrumental goals of some shared (as yet undiscovered) terminal goal (V6). But failing that, all that’s left is to work out a mutually acceptable utility distribution that is suboptimal along one or more of (V3-V5) and implement the associated actions. You can’t always get what you want. (2)
Well and good; nobody has claimed otherwise.
But, again, the Metaethics and Fun Sequences seem to depend(1) on a shared as-yet-undiscovered terminal goal that screens off the contradictions in our instrumental goals. If instead it’s instrumental links throughout the network, and what seem like terminal goals are merely those instrumental goals at the edge of whatever subset of the network we’re representing at the moment, and nothing prevents even our post-Singularity descendants from having mutually inhibitory goals… well, then maybe humanity’s values simply aren’t coherent; maybe some of our post-Singularity descendants will be varelse to one another.
So, OK… suppose we discover that, and the various tribes of humanity consequently separate. After we’re done throwing up on the sand, what do we do then?
Perhaps we and our AIs need a pluralist metaethic(3), one that allows us to treat other beings who don’t share our values—including, perhaps, the Babykillers and the SHFP and the Pebblesorters, as well as the other tribes of post-Singularity humans—as beings whose preferences have moral weight?
=============
(1) The whole meta-ethics Sequence is shot through with the idea that compromise on instrumental values is possible given shared terminal values, even if it doesn’t seem that way at first, so humans can coexist and extracting a “coherent volition” of humanity is possible, but entities with different terminal values are varelse: there’s just no point of compatibility.
The recurring message is that any notion of compromise on terminal values is just wrongheaded, which is why the SHFP’s solution to the Babykiller problem is presented as flawed, as is viewing the Pebblesorters as having a notion of right and wrong deserving of moral consideration. Implementing our instrumental values can leave us tragically happy, on this view, because our terminal values are the ones that really matter.
More generally, LW’s formulation of post-Singularity ethics (aka Fun) seems to depend on this distinction. The idea of a reflectively stable shared value system that can survive a radical alteration of our environment (e.g., the ability to create arbitrary numbers of systems with the same moral weight that I have, or even mere immortality) is pretty fundamental, not just for the specific Fun Theory proposed, but for any fixed notion of what humans would find valuable after such a transition. If I don’t have a stable value system in the first place, or if my stable values are fundamentally incompatible with yours, then the whole enterprise is a non-starter… and clearly our instrumental values are neither stable nor shared. So the hope that our terminal values are stable and shared is important.
This distinction also may underlie the warning against messing with emotions… the idea seems to be that messing with emotions, unlike messing with everything else, risks affecting my terminal values. (I may be pounding that screw with my hammer, though; I’m still not confident I understand why EY thinks messing with everything else is so much safer than messing with emotions.)
(2) I feel I should clarify here that my husband and I are happily married; this is entirely a hypothetical example. Also, my officemate recently brought me chocolate without my even having to leave my cube, let alone drive anywhere. Truly, I live a blessed life.
(3) Mind you, I don’t have one handy. But the longest journey begins, not with a single step, but with the formation of the desire to get somewhere.
I came here from the pedophile discussion. This comment interests me more so I’m replying to it.
To preface, here is what I currently think: Preferences are in a hierarchy. You make a list of possible universes (branching out as a result of your actions) and choose the one you prefer the most—so I’m basically coming from VNM. The terminal value lies in which universe you choose. The instrumental stuff lies in which actions you take to get there.
So I’m reading your line of thought...
I’m not sure how this line of thought suggests that terminal values do not exist. It simply suggests that some values are terminal, while others are instrumental. To simplify, you can compress all these terminal goals into a single goal called “Fulfill my preferences”, and do utilitarian game theory from there. This need not involve arranging the preferences in any hierarchy—it only involves balancing them against each other. Speaking of multiple terminal values just decomposes whatever function you use to pick your favorite universe into multiple functions.
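The “compress everything into one preference function” move can be made concrete with a small sketch. The goal functions and weights here are arbitrary assumptions, chosen only to mirror the V3/V4/V5 example from the parent comment:

```python
# Sketch: several "terminal" goals treated as components of one utility
# function over universes. Weights and numbers are arbitrary assumptions.

def u_marry(universe):            # V3: wanting to marry my boyfriend
    return 1.0 if universe["married"] else 0.0

def u_him_happy(universe):        # V4: wanting to make him happy
    return universe["his_happiness"]

def fulfill_my_preferences(universe, w3=0.4, w4=0.6):
    # The single compressed goal: a weighted balance of the components.
    # No hierarchy is involved; the goals are just traded off.
    return w3 * u_marry(universe) + w4 * u_him_happy(universe)

universes = [
    {"married": True,  "his_happiness": 0.2},  # he has V5: don't get married
    {"married": False, "his_happiness": 0.9},
]
best = max(universes, key=fulfill_my_preferences)
```

Reversing the causal link just changes the numbers in each candidate universe; the aggregation step itself never needs the goals arranged in a hierarchy.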
This seems unrelated to the surrounding points. Of course two agents can diverge—no one said that humans intrinsically shared the same preferences.
(Of course, platonic agents don’t exist, living things don’t actually have VNM preferences, etc, etc)
You might enjoy Arrow’s impossibility theorem though—it seems to relate to your concerns. (i’ts relevant for questions like: Can we compromise between multiple agents? What happens if we conceptualize one human as multiple agents?)
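The flavor of Arrow-style trouble is easy to show concretely with a Condorcet cycle, the classic illustration behind the impossibility results (the rankings below are a standard textbook example, not anything specific to CEV):

```python
# Three sub-agents, each with a strict ranking over universes A, B, C.
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    # True if a strict majority of agents rank x above y.
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# Pairwise majorities: A beats B, B beats C, and yet C beats A.
# Majority aggregation yields no coherent overall ranking.
cycle = (majority_prefers("A", "B")
         and majority_prefers("B", "C")
         and majority_prefers("C", "A"))
```

The same construction applies whether the three rankings belong to three people or to three conflicting subsystems of one person, which is what makes it relevant to “one human as multiple agents.”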
I’m on board with:
...treating preferences as identifying a sort order for universes.
...treating “values” and “preferences” and “goals” as more or less interchangeable terms.
...aggregating multiple goals into a single complex “fulfill my preferences (insofar as they are not mutually exclusive)” goal, at least in principle. (To the extent that we can actually do this, the fact that preferences might have hierarchical dependencies where satisfying preference A also partially satisfies preference B becomes irrelevant; all of that is factored into the complex goal. Of course, actually doing this might prove too complicated for any given computationally bounded mind, so such dependencies might still be important in practice.)
...balancing preferences against one another to create some kind of weighted aggregate in cases where they are mutually exclusive, in principle. (As above, that’s not to say in practice that all minds can actually do that. Different strategies may be appropriate for less capable minds.)
...drawing a distinction between which universe(s) I choose, on the one hand, and what steps I take to get there, on the other. (And if we want to refer to steps as “instrumental values” and universes as “terminal values”, that’s OK with me. That said, what I see people doing a lot is mis-identifying steps as universes, simply because we haven’t thought enough about the internal structure and intended results of those steps, so in practice I am skeptical of claims about “terminal values.” In practice, I treat the term as referring to instrumental values I haven’t yet thought enough about to understand in detail.)
I’m not sure that’s true. IIRC, a lot of the Fun Theory Sequence and the stuff around CEV sounded an awful lot like precisely this claim. That said, it’s been three years, and I don’t remember details. In any case, if we agree that humans don’t necessarily share the same preferences, that’s cool with me, regardless of what someone else might or might not have said.
And, yes, AIT is relevant.
Ask a Mathematician / Ask a Physicist’s “Q: Which is a better approach to quantum mechanics: Copenhagen or Many Worlds?” answers, to my surprise, unequivocally in favour of Many Worlds.
Lately I keep seeing a Google Ad entitled “Quantum Many Worlds”, which promises an introduction to quantum theory, and links to the quantum physics sequence at LW. Does anyone know the story behind the ad?
http://lesswrong.com/lw/2zc/currently_buying_adwords_for_lesswrong/
Some may appreciate the ‘Chrismas’ message OkCupid sent out to atheist members:
And I’ll start things off with a question I couldn’t find a place for or a post for.
Coherent extrapolated volition. That 2004 paper sets out what it would be and why we want it, in the broadest outlines.
Has there been any progress on making this concept any more concrete since 2004? How to work out a CEV? Or even one person’s EV? I couldn’t find anything.
I’m interested because it’s an idea with obvious application even if the intelligence doing the calculation is human.
On the topic of CEV: the Wikipedia article only has primary sources and needs third-party ones.
Have you looked at the paper by Roko and Nick, published this year?
No, I hadn’t found that one. Thank you!
Though it still doesn’t answer the question—it just states why it’s a good idea, not how one would actually do it. There’s a suggestion that reflective equilibrium is a good start, but that competing ideas to CEV also include that.
Is there even a little material on how one would actually do CEV? Some “and then a miracle occurs” in the middle is fine for these purposes, we have human intelligences on hand.
Are these two papers really all there is to show so far for the concept of CEV?
It isn’t a subject that I would expect anyone from, say, SIAI to actually discuss honestly. Saying sane things about CEV would be a political minefield.
Expand? Are you talking about saying things about the output of CEV, or something else?
Not just the output, the input and means of computation are also potential minefields of moral politics. After all this touches on what amounts to the ultimate moral question: “If I had ultimate power how would I decide how to use it?” When you are answering that question in public you must use extreme caution, at least you must if you have any real intent to gain power.
There are some things that are safe to say about CEV, particularly things on a technical side. But for most part it is best to avoid giving too many straight answers. I said something on the subject of what can be considered the subproblem (“Do you confess to being consequentialist, even when it sounds nasty?”). Eliezer’s responses took a similar position:
When describing CEV mechanisms in detail from the position of someone with more than detached academic interest you are stuck between a rock and a hard place.
On one hand you must signal idealistic egalitarian thinking such that you do not trigger in the average reader those aversive instincts we have for avoiding human tyrants.
On the other hand you must also be aware that other important members (ie. many of those likely to fund you) of your audience will have a deeper understanding of the practical issues and will see the same description as naive to the point of being outright dangerous and destructive.
My application is so that an organisation can work out not only what people want from it but what they would want from it. This assumes some general intelligences on hand to do the working out, but we have those.
I’ve been transparent about CEV and intend to continue this policy.
Including the part where you claim you wish to run it on the entirety of humanity? Wow, that’s… scary. I have no good reason to be confident that I or those I care about would survive such a singularity.
Michael Vassar is usually the voice within SIAI of such concerns. It hasn’t been formally written up yet, but besides the Last Judge notion expressed in the original CEV paper, I’ve also been looking favorably on the notion of giving a binary veto over the whole process, though not detailed control, to a coherent extrapolated superposition of SIAI donors weighted by percentage of income donated (not donations) or some other measure of effort exerted.
And before anyone points it out, yes I realize that this would require a further amendment to the main CEV extrapolation process so that it didn’t deliberately try to sneak just over the veto barrier.
Look, people who are carrying the Idiot Ball just don’t successfully build AIs that match up to their intentions in the first place. If you think I’m an idiot, worry about me being the first idiot to cross the Idiot Finish Line and fulfill the human species’ destiny of instant death, don’t worry about my plans going right enough to go wrong in complicated ways.
Won’t this incentivize people to lower their income in many situations, because the fraction of their income given as donations increases even if the total amount decreases?
In what situation would this be better than or easier than simply donating more, especially if percentage of income is considered over some period of time instead of simply “here it is?”
Only in situations in which the job allows for valuable ‘perks’ while granting a lower salary.
The thing that spooks me most about CEV (aside from the difficulty of gathering the information about what people really care about and the further difficulties of accurate extrapolation and some doubts about whether the whole thing can be made coherent) is that it seems to be planned as a thing that will be perfected and then imposed, rather than a system which will take feedback from the people whose lives are theoretically being improved.
Excuse me if this has been an ongoing topic and an aspect of CEV which is at least being considered, but I don’t think I’ve seen this angle brought up.
Sure, and people feel safer driving than riding in an airplane, because driving makes them feel more in control, even though it’s actually far more dangerous per mile.
Probably a lot of people would feel more comfortable with a genie that took orders than an AI that was trying to do any of that extrapolating stuff. Until they died, I mean. They’d feel more comfortable up until that point.
Feedback just supplies a form of information. If you disentangle the I-want-to-drive bias and say exactly what you want to do with that information, it’ll just come out to the AI observing humans and updating some beliefs based on their behavior, and then it’ll turn out that most of that information is obtainable and predictable in advance. There’s also a moral component where making a decision is different from predictably making that decision, but that’s on an object level rather than metaethical level and just says “There’s some things we wouldn’t want the AI to do until we actually decide them even if the decision is predictable in advance, because the decision itself is significant and not just the strategy and consequences following from it.”
Clearly, I should have read new comments before posting mine.
I don’t think it’s the sense of control that makes people feel safer in a car so much as the fact that they’re not miles up in the air.
I’m pretty confident that people would feel more secure with a magical granter of wishes than a command-taking AI (provided that the granter was not an actual genie, which are known to be jerks), because intelligent beings fall into a class that we are used to being able to comprehend and implement our desires, and AIs fall into the same mental class as automated help lines and Microsoft Office assistants, which are incapable of figuring out what we actually want.
When you build automated systems capable of moving faster and stronger than humans can keep up with, I think you just have to bite the bullet and accept that you have to get it right. The idea of building such a system and then having it wait for human feedback, while emotionally tempting, just doesn’t work.
If you build an automatic steering system for a car that travels 250 mph, you either trust it or you don’t, but you certainly don’t let humans anywhere near the steering wheel at that speed.
Which is to say that while I sympathize with you here, I’m not at all convinced that the distinction you’re highlighting actually makes all that much difference, unless we impose the artificial constraint that the environment doesn’t get changed more quickly than a typical human can assimilate completely enough to provide meaningful feedback on.
I mean, without that constraint, a powerful enough environment-changer simply won’t receive meaningful feedback, no matter how willing it might be to take it if offered, any more than the 250-mph artificial car driver can get meaningful feedback from its human passenger.
And while one could add such a constraint, I’m not sure I want to die of old age while an agent capable of making me immortal waits for humanity to collectively and informedly say “yeah, OK, we’re cool with that.”
(ETA: Hm. On further consideration, my last paragraph is bogus. Pretty much everyone would be OK with letting everybody live until the decision gets made; it’s not a make-immortal vs. let-die choice. That said, there probably are things that have this sort of all-or-nothing aspect to them; I picked a poor example but I think my point still holds.)
If this concern is valid, to my understanding, then the optimal (perfect?) system that CEV puts in place will take that kind of feedback and adjust itself and so on. CEV just establishes the original configuration. For a crude metaphor: CEV is the thing writing the constitution and building the voting machines, but the constitution can still have terms that allow the constitution to be changed according to the results of votes.
I would have described it as a system that is the ideal feedback taker (and anticipator).
Is that what they mean by “getting the inside track on the singularity”? ;-)
It gets to possibly say “No”, once. Nothing else.
Are you under the impression that jumping on statements like this, after the original statement explicitly disclaimed them, is a positive contribution to the conversation?
Yu’El—please don’t you jump on me! I was mostly trying to be funny. Check with my smiley!
This was a reference to Jaron Lanier’s comment on this topic—in discussion with you.
Woah there. I remind you that what prompted your first reply here was me supporting you on this particular subject!
I can sure see that in the fundraising prospectus. “We’ve been working on something but can’t tell you what it is. Trust us, though!”
Let’s assume things are better than that and it is possible to talk about CEV. Is anyone from SIAI in the house and working on what CEV means?
Even if it comes out perfect, Hanson will just say that it’s based on far-mode thinking and is thus incoherent WRT near values :p
What sort of person would I be if I were getting enough food, sex and sleep (the source of which was secure) to allow me to stay in far mode all the time? I have no idea.
A happily married (or equivalent) one? I am cosy in domesticity but also have a small child to divert my immediate energies, and I find myself regarding raising her as my important work and everything else as either part of that or amusement. Thankfully it appears raising a child requires less focused effort than ideal-minded parents seem to think (I can’t find the study quickly, sorry—anyone?), so this allows me to sit on the couch working or reading stuff while she plays, occasionally tweaking her flow of interesting stuff and occasionally dealing with her coming over and jumping up and down on me.
well you should be working on CEV and I shouldn’t.
Hence the question ;-)
Bad in bed, for a start. In far mode all the time?
No-one’s added sources to the CEV article other than primary ones, so I’ve merged-and-redirected it to the Friendly AI article. It can of course be un-merged any time.
Complete randomness that seemed appropriate for an open thread: I just noticed the blog post header on the OvercomingBias summary: “Ban Mirror Cells”
Which, it turned out when I read it, is about chirality, but which I had parsed as talking about mirror neurons, and the notion of wanting to ban mirror neurons struck me as delightfully absurd: “Darned mirror neurons! If I wanted to trigger the same cognitive events in response to doing something as in response to seeing it done by others, I’d join a commune! Darned kids, get off my lawn!”
Thank you! I’d been mourning the loss. There have been plenty of things I had wanted to ask or say that didn’t warrant a post even here.
It occurs to me that the concept of a “dangerous idea” might be productively viewed in the context of memetic immunization: ideas are often (but not always) tagged as dangerous because they carry infectious memes, and the concept of dangerous information itself is often rejected because it’s frequently hijacked to defend an already internalized infectious memeplex.
Some articles I’ve read here seem related to this idea in various ways, but I can’t find anything in the Sequences or on a search that seems to tackle it directly. Worth writing up as a post?
This seems like a good audience to solve a tip-of-my-brain problem. I read something in the last year about subconscious mirroring of gestures during conversation. The discussion was about a researcher filming a family (mother, father, child) having a conversation, and analyzing a 3 second clip in slow motion for several months. The researcher noted an almost instantaneous mirroring of the speaker’s micro-gestures in the listeners.
I think that I’ve tracked the original researcher down to Jay Haley, though unfortunately the articles are behind a pay wall: http://onlinelibrary.wiley.com/doi/10.1111/j.1545-5300.1964.00041.x/abstract
What I can’t remember is who I was reading that referenced it. It was likely to be someone like Malcolm Gladwell or Jared Diamond. Does this strike a chord with anyone?
[For context, I was interested in understanding repeatable thought patterns that span two or more people. I’ve noticed that I have repeated sequences of thoughts, emotions, and states of mind, each reliably triggering the next. I’ve considered my identity at any point to be approximately the set of those repeated patterns. I think that when I’m in a relationship, I develop new sequences of thought/emotion that span my partner’s mind and my own—each state may be dependent on a preceding state in its own or the other mind. I’m wanting to understand the modalities by which a state in one mind could consistently trigger a state in the other mind, how that ties in to those twins with conjoined brains, and if that implies a meaningful overlap in consciousness between myself and my wife.]
I’m not sure where you saw that reference, but I do know that there has been a fair amount of research on nonverbal mirroring, which is typically called “mimicry” (that’s the keyword I’d search for). Tanya Chartrand is one of the main researchers doing this work; here is her CV (pdf) which lists her published work (most of the relevant ones have “mimicry” in the title). Her 1999 paper with John Bargh (pdf) is probably the most prominent paper on the topic, and here (pdf) is a more recent paper which I picked because it’s available for free & it starts with a decent summary of existing research on the topic.
The kind of thing that you’re interested in—the development of thought/emotion/behavior patterns that span 2+ people in a relationship—is also an active topic of research. I don’t know as much about this area, but I do know that Mischel & Shoda have created one model of it. Here is the abstract of one paper of theirs which seems particularly relevant but doesn’t seem to be free online, and here (pdf) is another paper which is available for free.
Thank you! That information is very helpful.
Global Nuclear fuel bank reaches critical mass.
http://www.nytimes.com/2010/12/04/science/04nuke.html?_r=1&ref=science
I’m intrigued by this notion that the government solicited Buffett for the funding promise which then became a substantial chunk of the total startup capital. Did they really need his money, or were they looking for something else?
Yes, the very few people in government who care about nuclear proliferation really need private funds to do this, or even to do more obvious things, like decommission Russian warheads. This has been going on for twenty years. I wonder if Sam Nunn left the senate because he realized that such power was not helping him save the world.
I’m jealous of all these LW meetups happening in places that I don’t live. Is there not a sizable contingent of LW-ers in the DC area?
Can people do Saturday the 18th? 2pm? Bar or coffee?
I could make an appearance. I’m not super familiar with DC so staying pretty close to a metro station would be ideal.
I’d be interested. Though, IIRC, Overcoming Bias organizes a few meetups.
There are a few of us. I’ve brought it up before, I think. RobinZ is in the area, and I thought PhilGoetz was, but that might be old info.
I’m sure there are some lurking given our proximity to GMU.
ETA: Seriously though, if New Haven can pull off a meetup surely there are enough people in the DMV...
Where are we? lists several people in the DC area—and yes, PhilGoetz is/was among them.
I’m around.
LessWrong should have a section devoted to transhumanist topics.
It would be especially nice if we could build a collection of canonical explanations of recurring points, sort of like the sequences. People have been talking about a LW for existential risks for a long time; I think that topic fits naturally with transhumanism, the singularity, and predicting the future in general.
There aren’t enough posts on the site for including transhumanist ones on the main page to be considered a problem. Anything to get us away from exhortations to be nice and donate to charity!
What are some examples of topics you would like to discuss that you don’t think fit in the discussion section?
The singularity, intelligence explosion, cryonics, friendly AI, the SIAI, human intelligence enhancement
I think most of these topics fit in. Intelligence explosion/FAI/SIAI are all existential-risk related, making them important applied rationality topics. Cryonics as well; the SA incompetence thread continues to attract discussion. Any ability to enhance human intelligence has important consequences for rationality. The only thing you mention not directly related to rationality is the singularity.
I view the discussion section primarily as a forum for rationalists rather than as a forum for the discussion of rationality (though that is obviously a natural topic for rationalists). I am curious how much people disagree. If the discussion section got too busy, we could add further subreddits.
What are your reasons?
Lots of us are interested in the topic and write on it even when it doesn’t strictly relate to rationality. Segregating transhumanist posts would help us retain readers who are interested in increasing their rationality but would be turned off by posts such as the above one on CEV.
For selfish reasons I would benefit because I’m writing a book on singularity issues.
I agree for the reasons you’ve mentioned. Recently there has been a lot of discussion about what LessWrong should be about, and I think this is a good way to resolve that question.
Here’s a suggestion: What if we divided the forum into sub-pages by topic?
Agreed with Jack about the nuisance value of splitting up not being worth it for the current level of throughput.
If things do get significantly overloaded, a better (though harder to code) answer would be to allow individual users to define filters, and modify the site code to only display posts whose tags match the user’s filters.
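A minimal sketch of what such per-user filtering might look like, assuming posts carry tag lists. All names here (`matching_posts`, the post dictionaries) are illustrative, not taken from any real LessWrong codebase:

```python
# Hypothetical sketch of per-user tag filtering, as described above.
# Each post is represented as a dict with a "tags" list; a user's filter
# is a collection of tags they want to see.

def matching_posts(posts, allowed_tags):
    """Return only posts sharing at least one tag with the user's filter.

    An empty filter means "show everything" (no filtering configured).
    """
    if not allowed_tags:
        return list(posts)
    allowed = set(allowed_tags)
    return [p for p in posts if allowed & set(p.get("tags", []))]

posts = [
    {"title": "CEV open problems", "tags": ["transhumanism", "FAI"]},
    {"title": "Akrasia tricks", "tags": ["rationality"]},
]

print(matching_posts(posts, ["rationality"]))
```

A reader who filters on "rationality" would see only the second post, while someone with no filter set would see both, which is the backward-compatible default.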
I think the current level of throughput is low largely because there is no subsection of the kind James is arguing for. Maybe there are arguments that these topics are already appropriate in theory, but in practice I think the fear of being off-topic, the fear of being incorrectly perceived as off-topic, and the lack of active priming to post on these topics are inhibiting a lot of potentially useful contributions, especially from more cautious members.
This kind of thing is a good long-term solution but is there enough discussion at this point to justify this kind of division? It would be inconvenient to have to click through every subforum I was interested in just to see if there was anything new.
You can use http://lesswrong.com/r/all/recentposts/ if you don’t mind manually skipping over threads from subforums you’re not interested in. (On Reddit you can put together more specific combinations like this, or just configure your front page to show particular reddits, but neither of those seems to work here.)
That’s true—I think TheOtherDave’s suggestion above might be something of a compromise.
In general, we could use a better search/tagging system. Open tagging would help.
A section even lesser than “Discussion” for off-topic chat?
They are not “lesser” but different.
Michael Vassar speaks: “Breakthrough Philanthropy”.