Link(s)?
OH MY GOD. THAT WAS IT. THAT WAS VOLDEMORT’S PLAN. RATIONAL!VOLDEMORT DIDN’T TRY TO KILL HARRY IN GODRIC’S HOLLOW. HE WAITED ELEVEN YEARS TO GIVE HARRY A GRADE IN SCHOOL SO THAT ANY ASSASSINATION ATTEMPT WOULD BE IN ACCORDANCE WITH THE PROPHECY.
Duplicate comment, probably should be deleted.
Agreed. I actually looked up tax & spending for UK vs. Scandinavian countries, and they aren’t that different. It may not be a good distinction.
I thought of this last year after I completed the survey, and rated anti-agathics less probable than cryonics. This year I decided cryonics counted, and rated anti-agathics 5% higher than cryonics. But it would be nice for the question to be clearer.
Done, except for the digit ratio, because I do not have access to a photocopier or scanner.
Liberal here, I think my major heresy is being pro-free trade.
Also, I’m not sure if there’s actually a standard liberal view of zoning policy, but it often feels like the standard view is that we need to keep restrictive zoning laws in place to keep out those evil gentrifiers, in which case my support for looser zoning regulations is another major heresy.
You could argue I should call myself a libertarian, because I agree with the main thrust of Milton Friedman’s book Capitalism and Freedom. However, I suspect a politician running on Friedman’s platform today would be branded a socialist if a Democrat, and a RINO if a Republican.
(Friedman, among other things, supported a version of guaranteed basic income. To which today’s GOP mainstream would probably say, “but if we do that, it will just make poor people even lazier!”)
Political labels are weird.
and anyone smart has already left the business since it’s not a good way of making money.
Can you elaborate? The impression I’ve gotten from multiple converging lines of evidence is that there are basically two kinds of VC firms: (1) a minority that actually know what they’re doing, make money, and don’t need any more investors and (2) the majority that exist because lots of rich people and institutions want to be invested in venture capital, can’t get in on investing with the first group, and can’t tell the two groups apart.
A similar pattern appears to occur in the hedge fund industry. In both cases, if you just look at the industry-wide stats, they look terrible, but that doesn’t mean that Peter Thiel or George Soros aren’t smart because they’re still in the game.
Hi! Welcome to LessWrong! A lot of people on LessWrong are worried about the problem you describe, which is why the Machine Intelligence Research Institute exists. In practice, the problem of getting an AI to share human values looks very hard. But, given that human values are implemented in human brains, it looks like it should be possible in principle to implement them in computer code as well.
I think the “Well-kept gardens die by pacifism” advice is cargo-culted from a Usenet world where there weren’t ways to filter by quality aside from the binary censor/don’t censor.
Ah… you just resolved a bit of confusion I didn’t know I had. Eliezer often seems quite wise about “how to manage a community” stuff, but also strikes me as a bit too ban-happy at times. I had thought it was just overcompensation in response to a genuine problem, but it makes a lot more sense as coming from a context where more sophisticated ways of promoting good content aren’t available.
So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years.
I should note that it’s not obvious what the experts responding to this survey thought “greatly surpass” meant. If “do everything humans do, but at x2 speed” qualifies, you might expect AI to “greatly surpass” human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.
I like the idea of this fanfic, but it seems like it could have been executed much better.
EDIT: Try re-writing later? As the saying goes, “Write drunk; edit sober.”
So I normally defend the “trust the experts” position, and I went to grad school for philosophy, but… I think philosophy may be an area where “trust the experts” mostly doesn’t work, simply because with a few exceptions the experts don’t agree on anything. (Fuller explanation, with caveats, here.)
Have you guys given any thought to doing pagerankish stuff with karma?
Can you elaborate more? I’m guessing you mean people with more karma --> their votes count more, but it isn’t obvious how you do that in this context.
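For concreteness, here’s a minimal sketch of one way “votes weighted by the voter’s karma” could work, computed iteratively in the PageRank style. Everything in it (the function name, the damping factor, the exact update rule) is a hypothetical illustration, not a description of how LessWrong’s karma actually works or what anyone has proposed in detail.

```python
# A "PageRank-ish" karma scheme (hypothetical sketch): a vote's weight is
# derived from the voter's own karma, and karma is recomputed iteratively,
# so votes from well-regarded users count for more.

def pagerank_karma(votes, users, damping=0.85, iterations=20):
    """votes: dict mapping each voter to a list of (target, +1/-1) votes."""
    karma = {u: 1.0 for u in users}  # start everyone with equal weight
    for _ in range(iterations):
        new_karma = {u: 1.0 - damping for u in users}  # baseline weight
        for voter, cast in votes.items():
            if not cast:
                continue
            share = damping * karma[voter] / len(cast)  # split voter's influence
            for target, sign in cast:
                new_karma[target] += sign * share
        # floor at zero so a downvote brigade can't drive weights negative
        karma = {u: max(k, 0.0) for u, k in new_karma.items()}
    return karma

# Toy example: B's upvote of C is worth more once A has upvoted B.
votes = {"A": [("B", +1)], "B": [("C", +1)], "C": []}
print(pagerank_karma(votes, users=["A", "B", "C"]))
```

The intuition is the same as PageRank’s: an upvote from someone whose own comments are valued by well-regarded users counts for more than an upvote from a throwaway account, which is exactly the property that would make mass-downvoting from sockpuppets less effective.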
Everyone following the situation knew it was Eugine. At least one victim named him publicly. Sometimes he was referred to obliquely as “the person named in the other thread” or something like that, but the people who were following the story knew what that meant.
I’m glad this was done, if only to send a signal to the community that something is being done, but you have a point that this is not an ideal solution and I hope a better one is implemented soon.
I’m not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be “F you for daring to cause Eliezer pain, by criticizing him and the organization he founded.”
If that’s the intended message, I submit that when someone is a public figure who writes and speaks about controversial subjects, and is the founder of an org that’s fairly aggressive about asking people for money, they really shouldn’t be insulated from criticism on the basis of their feelings.
The reason that nothing has been done about it is that Eliezer doesn’t care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).
My guess is that he doesn’t care nearly as much about LW in general now as he used to...
This. Eliezer clearly doesn’t care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major reason why this comment is the first thing I’ve posted on LessWrong in well over a month.
I know a number of people have been working on launching a LessWrong-like forum dedicated to Effective Altruism, which is supposedly going to launch very soon. Here’s hoping it takes off—because honestly, I don’t have much hope for LessWrong at this point.
...my motivation has been “I see people around me succeeding by these means where I have failed, and I want to be like them”.
Seems like noticing yourself wanting to imitate successful people around you should be an occasion for self-scrutiny. Do you really have good reasons to think the things you’re imitating them on are the cause of their success? Are the people you’re imitating more successful than other people who don’t do those things, but who you don’t interact with as much? Or is this more about wanting to affiliate with the high-status people you happen to be in close proximity to?
On philosophy, I think it’s important to realize that most university philosophy classes don’t assign textbooks in the traditional sense. They assign anthologies. So rather than read Russell’s History of Western Philosophy or The Great Conversation (both of which I’ve read), I’d recommend something like The Norton Introduction to Philosophy.