Sounds like the Buddha and his followers to me.
patio11 is something of a “marketing engineer”, and his target audience is young software enthusiasts (Hacker News). What makes you think that this isn’t pretty specific advice for a fairly narrow audience?
Spoiler: Gura ntnva, gur nyvra qbrf nccneragyl znantr gb chg n onpxqbbe va bar bs gur uhzna’f oenvaf.
I agree that the AI you envision would be dangerously likely to escape a “competent” box too; and in any case, even if you manage to keep the AI in the box, attempts to actually use any advice it gives are extremely dangerous.
That said, I think your “half an inch” is off by multiple orders of magnitude.
My comment was mostly inspired by (known effective) real-world examples. Note that relieving anyone who shows signs of being persuaded is a de-emphasized but vital part of this policy, as is carefully vetting people before trusting them.
Actually implementing an “N people at a time” rule can be done using locks, guards, and/or cryptography (note that many such algorithms are provably secure against an adversary with unlimited computing power, i.e. “information-theoretic security”).
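For illustration (not part of the original point), here is a minimal Python sketch of one such algorithm, Shamir secret sharing; the prime, the share counts, and the “key” are placeholders rather than anything a real installation would use. Any k of the n shares reconstruct the secret, and fewer than k reveal nothing about it, even to an adversary with unlimited computing power.

```python
# Toy sketch of an "N people at a time" release rule via Shamir secret sharing.
# All concrete values (prime, key, k, n) are illustrative assumptions.
import random

PRIME = 2**127 - 1  # a prime large enough for a short secret


def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares


def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    key = 123456789                      # stand-in for whatever unlocks the box
    shares = make_shares(key, k=3, n=5)  # any 3 of 5 guards suffice
    assert recover_secret(shares[:3]) == key
    assert recover_secret(shares[2:5]) == key
```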
Note that the AI box setting is not one which security-minded people would consider “competent”; once you’re convinced that AI is dangerous and persuasive, the minimum safeguard would be to require multiple people to be present when interacting with the box, and to only allow release with the assent of a significant number of people.
It is, after all, much harder to convince a group of mutually-suspicious humans than to convince one lone person.
(This is not a knock on EY’s experiment, which does indeed test a level of security that really was proposed by several real-world people; it is a knock on their security systems.)
For me, high (insight + fun) per (time + effort).
(Are you sure you want this posted under what appears to be a real name?)
I have no problem with this passage. But it does not seem obviously impossible to create a device that stimulates that-which-feels-rightness proportionally to (its estimate of) the clippiness of the universe—it’s just a very peculiar kind of wireheading.
As you point out, it’d be obvious, on reflection, that one’s sense of rightness has changed; but that doesn’t necessarily make it a different qualia, any more than having your eyes opened to the suffering of (group) changes your experience of (in)justice qua (in)justice.
Consider this explanation, too.
I don’t think it’s unfair to put some restrictions on the universes you want to describe. Sure, reality could be arbitrarily weird—but if the universe cannot even be approximated within a number of bits much larger than the number of neurons (or even atoms, quarks, whatever), “rationality” has lost anyway.
(The obvious counterexample is that previous generations would have considered different classes of universes unthinkable in this fashion.)
It’s not too hard to write Eliezer’s 2^48 (possibly invalid) games of non-causal-Life to disk; but does that make any of them real? As real as the one in the article?
It’s true that intelligence wouldn’t do very well in a completely unpredictable universe; but I see no reason why it doesn’t work in something like HPMoR, and there are plenty of such “almost-sane” possibilities.
This comment is relevant.
Mostly, what David_Gerard says (he put it better than I managed to); in part, “be nice to whatever minorities you have”; and finally, yes, “this is a good cause; we should champion it”. “Arguments as soldiers” is partly a valid criticism, but note that we’re looking at a bunch of narratives, not a logical argument; and note that very little “improvement of the other’s arguments” seems to be going on.
All of what you say is true; it is also true that I’m somewhat thin-skinned on this point due to negative experiences on non-LW fora; but I also think that there is a real effect. It is true that the comments on this post are not significantly more critical/nitpicky than the comments on “How minimal is our intelligence”. However, the comments here do seem to pick far more nits than, say, the comments on “How to have things correctly”.
The first post is heavily fact-based and defends a thesis based on—of necessity—incomplete data and back-projection of mechanisms that are not fully understood. I don’t mean to say that it is a bad post; but there are certainly plenty of legitimate alternative viewpoints and footnotes that could be added, and it is no surprise that there are a lot of both in the comments section.
The second post is an idiosyncratic, personal narrative; it is intended to speak a wider truth, but it’s clearly one person’s very personal view. It, too, is not a bad post; but it’s not a terribly fact-based one, and the comments find fewer nits to pick.
This post seems closer to the second post—personal narratives—but the comment section more closely resembles that of the first post.
As to the desirability of this effect: it’s good to be a bit more careful around whatever minorities you have on the site, and this goes double for when the minority is trying to express a personal narrative. I do believe there are some nits that could be picked in this post, but I’m less convinced that the cumulative improvement to the post is worth the cumulative… well, not quite invalidation, but the comments section does bother me, at least.
If a post has 39 short comments saying “I want to see more posts like this post” and 153 nitpicks, that says something about the community reaction. This is especially relevant since “but this detail is wrong” seems to be a common reaction to these kinds of issues on geek fora.
(Yes, not nearly all comments are nitpicks, and my meta-complaining doesn’t contribute all that much signal either.)
One relevant datum: when I started my studies in math, about 33% of the students were female. In the same year, about 1% (i.e. one) of the computer science students was female.
It’s possible to come up with other reasons—IT is certainly well-suited to people who don’t like human interaction all that much—but I think that’s a significant part of the problem.
It bothers me how many of these comments pick nits (“plowing isn’t especially feminine”, “you can’t unilaterally declare Crocker’s Rules”) instead of actually engaging with what has been said.
(And those are just women’s issues; women are not the only group that sometimes has problems in geek culture, or specifically on Less Wrong.)
This is a bit un-LW-ian, but: I’m earnestly happy for you. You sound, if not happier, more fulfilled than in your first post on this site. (Also, ambition is good.)