I spent my childhood believing I was destined to be a hero
in some far off magic kingdom.
It was too late when I realized that I was needed here.
Vulture
Something that just occurred to me (separate from my took-it comment): Scott, do you take your own survey?
For what it’s worth, I perceived the article as more affectionate than offensive when I initially read it. This may have something to do with full piece vs. excerpts, so I’d recommend reading the full piece (which isn’t that much longer) first if you care.
I think LW has the right kind of community for Polymath, and I think it’s a good idea to give it a try.
Eliezer seems to be really really bad at acquiring or maintaining status. I don’t know how aware of this fault he is, since part of the problem is that he consistently communicates as if he’s super high status.
Except that the broad umbrella of ideas which even a reasonable person might construe to be “racist, misogynist, or queerphobic” covers a lot of things which are nowhere close to being settled questions the way theism is.
I think the survey is pushed by SJW trolls
What does this even mean?
I almost didn’t click on this submission because I was preconsciously thinking “Oh, that isn’t directed at me, cause I don’t take supplements”. Then I realized that that was stupid and took it. I think you will probably get some pretty serious selection bias of that sort, though.
Pure curiosity question: What is the general status of UDT vs. TDT among y'all serious FAI research people? MIRI's publications seem to exclusively refer to TDT; people here on LW seem to refer pretty much exclusively to UDT in serious discussion, at least since late 2010 or so; I've heard it reported variously that UDT is now standard because TDT is underspecified, and that UDT is just an uninteresting variant of TDT so as to hardly merit its own name. What's the deal? Has either one been fully specified/formalized? Why is there such a discrepancy between MIRI's official work and discussion here in terms of choice of theory?
[N]ature is constantly given human qualities. Wordsworth wrote that “nature never did betray the heart that loved her.” Mother Nature has comforted us in every culture on earth. In the 20th and 21st centuries, some environmentalists claimed that the entire earth is a single ecosystem, a “superorganism” in the language of Gaia.
I would argue that we have been fooling ourselves. Nature, in fact, is mindless. Nature is neither friend nor foe, neither malevolent nor benevolent.
Nature is purposeless. Nature simply is. We may find nature beautiful or terrible, but those feelings are human constructions. Such utter and complete mindlessness is hard for us to accept. We feel such a strong connection to nature. But the relationship between nature and us is one-sided. There is no reciprocity. There is no mind on the other side of the wall. That absence of mind, coupled with so much power, is what so frightened me...
The government, though, was a different matter altogether. I assumed that a lot of very smart people had put a lot of effort into its design — that’s what the “Founding Fathers” meme implied, anyway.
I’ve always taken the framing of the US Constitution as a cautionary tale about the importance of getting things exactly right. The founding fathers were highly intelligent (some of them, anyway), well-read and fastidious; after a careful review of numerous different contemporary and historical government systems, from the Iroquois confederacy to ancient Greek city-states, they devised a very clever, highly non-obvious alternative designed to be watertight against any loopholes they could think of, including being self-modifying in carefully regulated ways.
It almost worked. They created a system that came very, very close to preventing dictatorship and oligarchy… and the United States today is a grim testament to what happens when you cleverly construct an optimization engine that almost works.
Tulpa References/Discussion
Each normal person’s normalized utility without hearing the symphony is 0.99999. Hearing the symphony would make it 1.00000. The Beethoven utility monster would be at 0 without hearing the symphony and 1 hearing it. Thus, if we directly sum normalized utilities, it’s better for the Beethoven utility monster to hear the symphony than for 90,000 regular people to do the same.
This seems suspicious.
Am I the only one who doesn’t find this suspicious at all? After all, the Beethoven utility monster would gain 100,000 times as much fulfillment from the symphony as the normal people; it makes intuitive sense to me that it would be unfair to deny the BUM the opportunity to hear Beethoven’s ninth just so that, say, 100 normal people could hear it. After all, those people wouldn’t be that much worse off not having heard the symphony, which the BUM would rather die than not hear.
Obviously this intuition breaks down in a lot of similar thought experiments (should we let the BUM run over pedestrians in the road on its way to Carnegie Hall? etc.) but if the goal is to show that summing normalized utility can give undesirable or unintuitive results, that particular thought experiment isn’t really ideal.
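The arithmetic in the thought experiment above can be checked directly. A minimal sketch (the figures are taken from the example itself; nothing else is assumed):

```python
# Normalized utilities from the thought experiment above:
# each normal person gains 0.00001 from hearing the symphony
# (0.99999 -> 1.00000); the Beethoven utility monster gains 1.0
# (0 -> 1).

monster_gain = 1.0 - 0.0
per_person_gain = 1.00000 - 0.99999

# Total gain if 90,000 normal people hear it instead of the monster:
crowd_gain = 90_000 * per_person_gain

# Direct summation favors the monster whenever the crowd's total gain
# stays below 1.0, i.e. for any crowd smaller than 100,000 people.
print(crowd_gain)                 # ~0.9
print(crowd_gain < monster_gain)  # True
```

So under direct summation the break-even point is exactly 100,000 normal listeners; 90,000 falls short, which is why the sum favors the monster.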
rewarding the “ability” to entertain any argument “no matter how ‘politically incorrect’” (to break out of some jargon, “no matter how likely to hurt people”) results in a system that prizes people who have not been socially marginalized or who have been socially marginalized less than a given other person in the discussion
To paraphrase: Our community is exclusionary in the sense that its standards for what constitutes an information hazard (and thus a Forbidden Topic) are as stingy as possible, which means that it can’t be guaranteed safe for people more vulnerable to psychological damage by ideas than the typical LessWrong crowd.
It’s possible that this problem could be resolved with a more comprehensive “trigger warning” tagging system and a filtering system akin to tumblr savior. Then there could be a user preference with a list of checkboxes, e.g.
Hide comments and posts about
[ ] Race
[x] Gender
[ ] Sexual Violence
etc.
This could also double as protection for people who want to participate in LessWrong but have, for example, Posttraumatic Stress Disorder which could be triggered by some topics.
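The filtering idea above amounts to hiding any post whose tags intersect the user's checked boxes. A minimal sketch, with entirely hypothetical names and data (no such feature or API exists on the site):

```python
# Hypothetical sketch of the tag-based filtering idea above.
# The user's checked boxes become a set of hidden tags; a post is
# shown only if none of its tags are in that set.

hidden_tags = {"gender"}  # e.g. the user checked the "Gender" box

posts = [
    {"title": "Open thread", "tags": set()},
    {"title": "Survey results", "tags": {"gender", "race"}},
]

# Set intersection (&) finds any overlap between a post's tags and
# the hidden set; posts with any overlap are filtered out.
visible = [p for p in posts if not (p["tags"] & hidden_tags)]
print([p["title"] for p in visible])  # ['Open thread']
```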
And here it is, as a pdf! (I finally thought of trying to log in as a subscriber)
People who are often misunderstood: 6% geniuses; 94% garden-variety nonsense-spouters
A side note to your otherwise excellent comment:
“we only care about the ‘good’ people (women, black, trans, etc.)”
As someone from the other side of the fence, I should warn you that your model of how liberals think about social justice seems to be subtly but significantly flawed. My experience is that virtually no liberals talk or (as far as I can tell) think in terms of “good” vs. “bad” people, or more generally in terms of people’s intrinsic moral worth. A more accurate model would probably be something like “we should only be helping the standard ‘oppressed’ people (women, black, trans, etc.)”. The main difference being that real liberals are far more likely to think in terms of combating social forces than in terms of rewarding people based on their merit.
I’m pretty sure “facist” is a misspelling of “fascist”, not of “racist”. Also, it would seem that the word “rake” has some colloquial meaning that I’ve never heard before. From context I assume it’s something like “willfully evil person”, but I don’t actually know.
Haven’t tried it myself, but it seems to work for Scott Alexander.
Taken! The way you were being so apologetic about the length, I thought it would be much more grueling—I found it quick and fun! :)