There have been a lot of open threads in the past, but not recently. This one is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets good enough, open a new top-level post. Besides, I have an issue I'd like to discuss here. Enjoy!
On March 5 I moved from Russia to Switzerland and now work for Google Zurich. I guess this makes me available for new and exciting plots :-) Any LWers nearby?
Congratulations! But frankly I’m a bit concerned. You described your previous job as leaving you plenty of free time to work on problems of interest to yourself (or perhaps more to the point, problems that are of interest to me :). Does it look like Google will be working you harder?
Yeah, it does so far. But even in my first three days at Google I’ve found plenty of free time to solve some problems concerning Solomonoff induction and managed to prove a tricky point to paulfchristiano :-)
If you want someone intelligent to argue with, look up Adam Wildawsky @your office—he’s a devout Randian :)
Politics is the mind-killer.
We were discussing at the London meet on Sunday how everyone on LW is called Vladimir or Dave. Thus, everyone who isn’t should seriously consider changing their name forthwith.
I worked once at a small startup (very small; I was employee #6) at which two of us were named Dave, which led to many funny moments, or at least moments made funny by stress and sleep deprivation.
Some time afterwards we hired a developer named Dave, which resulted in a development group composed entirely of Daves.
We then hired a QA engineer into the group, who wasn’t named Dave, and we decided that this was an unacceptable violation of development-group polymorphism, so we officially renamed him “Dave.” I think we even gave him a certificate.
This is why tech support is called Bob—at one point in the mid-1990s, all tech support staff at Demon Internet were called Bob.
More generally, it’s the London Dave Problem. (e.g. I get “Diva”. This was the name of my cat.)
[I’ve just worked out what function the Open Thread serves now!]
Exactly: it’s for discussion of topics too trivial for the Discussion section.
Beats waiting around for an opportunity to hijack a comments thread from some other post, I suppose.
It’s a conspiracy, I tell you!
This got even weirder than I thought. When I came to log in, I made an account and typed my first post as OtherDavid, and just before hitting send noticed that one of the most recent comments in the sidebar was from TheOtherDave.
My new username is an attempt to be slightly more distinctive within the Dave subset of ‘people referencing that they are called Dave’.
It’s not LW’s fault, mind. I recently started a new job and was the third David there, in a team of under 20. I didn’t want to be ‘Dave’ or known by my surname, so ended up calling myself D3.
Armor digivolution!
I mostly got R2D2 type jokes at work. Which then led to a whole Star Wars “who in the office is the Dark Lord of the Sith’ thing.
Congratulations. Were you headhunted, or did you get a referral or apply? IIRC from some earlier thread on HN, you don’t interview well. Good that you got over that.
I was referred by someone who found me online (on LiveJournal, of all places). It’s true that I often fail interviews where they ask about technology specifics like API calls, because I’ve given up memorizing this stuff long ago. Math and algorithms, on the other hand, are easy and I never failed an interview on those. The best interviews are the ones where I get to write code and explain it :-)
This sounds like a good filter to get hired by the right sort of software company.
Minor question about netiquette for this site.
If I find a comment that I disagree with, and post a reply explaining why I disagree with it, can I also down-vote that comment, or is it considered petty to do both? What if someone posts a reply to one of my comments disagreeing with me, and I am not swayed and still think they are wrong: do I down-vote them, or does that only have the effect of punishing disagreement?
Not a question of much significance, but this is probably the most appropriate place to ask.
No clue about a general answer, so I’ll give you mine. I figure if enough people do that, a general answer may emerge.
I consider the two independent. That is, I’ll downvote if I think it’s worth downvoting, and I’ll reply if I think it’s worth replying, and I’ll do both if I think both. I don’t think that’s petty.
I endorse adding an explanation of the downvote when I do both, although I don’t always actually do it.
As for when to downvote, the rule of thumb is “downvote what you want less of; upvote what you want more of.”
Beyond that, well, individual preferences differ. For my own part, I don’t want less disagreement per se so I don’t downvote things I disagree with, although many things I downvote I also disagree with. But some people do.
I agree.
I agree.
I generally hold off on downvoting any posting I’m publicly engaging with. It feels too much like discussion as a fight, in which I am armed not only with the Broadsword of Argument, but also the Flintlock of Karma.
When I downvote anything, it’s for more than just simple disagreement, but something beyond that: insults, persistence in refusing to read background information, etc. I’ve even upvoted things I disagreed with, for making an argument that was worth making, if only to see it refuted, and I’ve downvoted things I agreed with, on grounds such as triviality or irrelevance.
ETA: I also never mention any specific up or downvotes.
In general, I will only downvote a comment in a thread that I have commented in if the comment is egregiously wrong or rude. I don’t feel any similar restraint against upvoting comments that seem correct even if they agree with me. I also will upvote comments that disagree with me if they make good points even if I think they are overall wrong. I don’t know how common this approach is.
My take: if it is a conversation with several people or is a first-level comment, it is fair game. If it is a two-person conversation, it is generally bad form.
Ok, so some time ago I had an idea, but I haven’t posted about it because it seemed too dangerous. However, I just came up with a workaround for that.
Things that speak for telling someone: it’s clever, interesting, fun, demonstrates TDT stuff, and sounds like the kind of thing that likely has consequences I haven’t thought of. Things that speak against telling it: it does not seem very probable, it’s not very closely checked or rigorous, it falls into a reference class with a bad history, and it has a small but significant chance of being a dangerous Roko-style memetic hazard.
My idea for a solution: anyone who wants to know about it AND has gone through the Roko thing unharmed, thus having proven reasonably resistant to these things, please PM me for discussion to see if it’s worthwhile/harmless enough to make a public post about.
(Note to people voting on this post: unless stated otherwise in a comment, I’ll interpret upvotes as approval of handling this kind of danger this way and downvotes as saying I should have just posted it publicly right away. (Not that I would have done that even if I thought people wanted it; still too risky.))
Proposal: should there be Non-Rationality Quote Posts? These would be a place to post quotes which are not closely related to rationality, but which may be of interest to various people in the Less Wrong community.
(Poll instructions: vote up your preferred option; then vote down the karma balance.)
Karma balance
Yes
No
No, there should be no Non-Rationality Quote Posts.
Yes, there should be Non-Rationality Quote Posts.
Karma balance to Non-Rationality Quotes thread poll.
Eurgh, that didn’t work at all. I didn’t balance the karma to balance out those who didn’t upvote the original but downvoted the balance.
What are you balancing? By my count, there are 19 upvotes on poll answers and only 17 downvotes on the karma balance.
Roko Mijic—Bootstrapping safe AGI goal systems. 80% Roko, 20% Mark Waser.
“Lovely AI”: friendlier-than-thou. So cheesy, though.
Hmm, a lot of the people in the question section need to learn what a question is.
That would be Eray Ozkural at the end. The question section is somewhat entertaining—at least for me. On the one hand, it is kind of fun to see researchers touching on SIAI positions—but on the other hand it is rather disappointing that they didn’t do a better job of it.
For instance, when Roko said he was kind of OK with crushing non-governmental research, in addition to “what a shame”, it also seems important to add: “how?” and “will the Eastern bloc agree to that?” and “do you really think that is going to work?”
Today’s xkcd deals with some themes relevant to Less Wrong.
Revealed preference theory wins again.
Cracked has an article about how many common forms of socially conscious actions don’t do anything helpful or actively hurt. Essentially fuzzies v. utilons in a nutshell.
http://www.indystar.com/apps/pbcs.dll/article?AID=2011103200369
SIAI, hire this kid before string theory gets him!
Richard Loosemore powns the Lifeboat Foundation.
There’s no O in “pwn”.
That’s got to be the first time I’ve heard someone correct the spelling of gamer jargon.
THANK YOU.
Oops! My finger must have slipped!
I don’t have any personal investment in or affection for the Lifeboat Foundation or Richard Loosemore. But the linked article lost me when it cited the Southern Poverty Law Center as the “gold standard for monitoring and classifying hate groups.” I’m under the impression that the SPLC is not at all neutral or rationalist, but rather is a heavily ideological and political organization.
The SPLC definitely has issues, but the rest of the article speaks for itself and provides more than enough evidence without that part. There are serious existential risks posed by science and technology, but it seems clear from this that the Lifeboat Foundation isn’t a good group to handle or discuss those issues.
Comments from some folk they smeared.
Now that I’ve read the article, I do think it’s actually worth discussing.
The anchor text is critically misleading, by conflating Eric Klien with the Lifeboat Foundation; the substance of the linked article is an exposé on how Klien is basically playing the Foundation to further his neoconservative agenda.
The article is a little too well-sourced to easily blow off; notably, it includes a link to a Google-cached copy of Klien’s blog in which he says, among other things:
I have developed Lifeboat Foundation with a Trojan Horse meme that tries to wrap our goals in the Religion of Science memes.
Eric Klien is founder and president of the Lifeboat Foundation. The article does bash the foundation independently of its bashing of Eric—as follows:
It is worth adding that the Lifeboat Foundation is also spectacularly ineffectual. All it seems to do is have a discussion list where a few people argue occasionally, and a set of boards where (as far as I can see) nothing happens.
Or, looking at it another way, the Lifeboat Foundation is very effective indeed. It raises money, and it supplies a very high profile to its founder, Eric Klien. As far as I can see, that is all it has ever done.
When I read it in its entirety, the linked article didn’t really worry me with respect to secret motives. I am, though, worried about effectiveness. The tone of Klien’s post was self-indulgent and melodramatic, and I’m not sure that’s the best way to attract effective allies who can help with material gains. Does anyone have more info on how effective the Lifeboat Foundation is at promoting existential risk mitigation?
I’m not yet convinced that Klien actually wrote the Trojan horse bit. Geller says he did. But see this 2007 article from Geller’s blog in which Klien talks about “Religion of Science” but not about “Trojan horses”.
Geller leaves me totally disgusted for her hate-mongering, and Klien strikes me as a complete creep for publicly fantasizing about using nanotech to spy on women in the shower. But I don’t see any real anti-science agenda on Klien’s part.
Perhaps we should see what Geller and Klien say. Did Geller really mess with Klien’s article? It does seem strange to leave such a self-incriminating document just lying around on the internet for years.
You surely have to be nuts to write “TROJAN HORSE” on the side of your Trojan horse.
Ah, I’d missed earlier that the blog where that was cached from wasn’t actually Klien’s website.
We already know he’s neoconservative. It’s not that much of a stretch.
Hah! That’s the spirit! Hit me with those downvotes! Shoot the messenger!
Joshua Greene—Beyond Point-and-Shoot Morality—video for “Harvard Thinks Big”. An 11 minute lecture. Joshua Greene’s thesis is well known around here.
It’s about trolley problems and the ethics of helping strangers.
Note: this link has subsequently been posted again here.
I ran into a nice little piece about the interaction between logic and probability. It discusses work by Michael Hardy (found here) which helps clarify how logic and probability should behave when propositions are uncertain.
The “Discussion” section has taken over from the old open thread system.
EDIT: All right, but at least post Open Threads in the Discussion section?
You were corrected conclusively when making this point last month. Did you read the (heavily upvoted) replies? Similar discussions—with the same conclusion—have been had in earlier threads as well and require no further reiteration.
Open threads continue to be used in the discussion section for things that are not worth creating a whole discussion thread on but which are nevertheless worth making a comment on.
Advice to Thomas: Make threads like this only when you actually have a comment to make that requires it! On-demand instantiation!
My experience as a computer scientist/software engineer tells me that this is obviously right. But humans are not computers: their implementation of on-demand instantiation does not abstract away the complexity behind a call to an accessor function, but instead requires deliberate, conscious work, which can make them give up on the whole thing even though it is totally easy. So I think it is all right if someone wants to create an open thread for the month that will already be in place when someone else wants to add a comment to it.
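For readers without the software background, here is a minimal sketch of what on-demand instantiation means in code, with the open thread as the running example; the class and method names are hypothetical, made up purely for illustration:

    # Lazy ("on-demand") instantiation: the thread for a month is only created
    # the first time someone asks for it, and the existence check is hidden
    # inside the accessor. All names here are hypothetical.
    class OpenThreads:
        def __init__(self):
            self._threads = {}  # month -> list of comments, created lazily

        def get_thread(self, month):
            # The accessor hides the "does it exist yet?" check, which is the
            # step a human would otherwise have to perform consciously.
            if month not in self._threads:
                self._threads[month] = []
            return self._threads[month]

    threads = OpenThreads()
    threads.get_thread("2011-03").append("first comment")   # thread created here
    threads.get_thread("2011-03").append("second comment")  # reused, not recreated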
That said, there wasn’t actually all that much discussion—though the low cost of threads probably means they’re still worthwhile.
Yes. (I am perturbed that I didn’t notice before that this wasn’t in the discussion section, where I would expect an open thread to be.) And this one should be moved.
My karma points were counted as if the post were in the Discussion section (1 point per vote), even when it wasn’t. An interesting bug?
And another new xkcd that is highly relevant to LW themes. I almost have to wonder if Munroe is lurking here.
Just stumbled upon a fascinating link that may spawn some interesting discussion here: http://www.youtube.com/watch?v=yfRVCaA5o18&feature=player_embedded
Congrats to Eliezer for passing the 100,000 karma point!