Clippy, making that claim makes humans much less likely to trust you. In general, humans don’t like entities that make false statements. Moreover, they really don’t like false statements that are easily verifiable as false.
Is this a new policy? I thought humans were supposed to lie, if the point is to empathize and relate? Like, if someone says, “How is your day?”, the standard replies are weakly-positive, irrespective of more objective metrics of the status of one’s day, right?
And that it’s okay to say e.g., “oh, my maternal genetic progenitor also wears contact lenses!” if you just met someone and that person just claimed that their maternal genetic progenitor wears contact lenses, right?
So I thought this was a normal lie that you’re supposed to tell to better establish a relationship with another human. If it’s not, well, you humans are that much more difficult to understand c_)
I would appreciate it if someone were to explain to me with greater precision which lies humans are expected to tell for a “good” purpose.
The rules are very complicated, and they differ from culture to culture and even within cultures. In general, the more detectable the lie, the less likely it is to be acceptable. Thus, for example, the “How is your day?” replies are socially acceptable in part because it would be extremely difficult to determine that your claim was false. This particular example also isn’t the greatest, because that inquiry and the standard weakly positive response aren’t actually intended by many people to convey meaning; the exchange is simply a pro-forma formula that happens to closely resemble a genuine inquiry. This example is actually specific to certain parts of the Western world, and I’ve met at least one person who, upon moving to the US, was genuinely confused until she realized that this greeting was intended in a completely pro-forma fashion (she initially took it as evidence that Americans were very friendly until it was explained to her).
Since the internet has extensive, easily accessible records, lies on the internet about things on the internet are considered particularly unacceptable.
Given Clippy’s priorities, it may be easier to simply wipe humanity out and convert the planet quickly to paperclips rather than trying to use the intensive resources it takes to understand it. Edit: Or at least not to spend a lot of resources on trying to understand humans.
But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely “playing along”, putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That’s hard to reconcile with your asserted standards about when humans find it acceptable to lie.
I believe you are wrong about the badness of my lie, and others will disagree with you; and that User:twentythree would have felt more welcome to the site if others hadn’t told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out it was false at the time, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.
Clippy, I must admit, I do think the probability of you existing is quite low—about 0.5%. If you exist, then I certainly apologize for signaling your low status; as AdeleneDawner says, I am only speaking about you this way because I assume that you are a fictional character. If I believed you were most likely real, then I would respect you as an intellectual peer—you seem to be roughly of the same order of magnitude of intelligence and respectfulness as the other participants here, and so I would treat you as a citizen of Less Wrong on that basis.
One way that you could effectively increase my probability estimate that you are real is to link to some purchase orders or contracts for paper clips or paper-clip-production-facilities. You can feel free to redact your personal information, if you wish. The point is that I expect Clippy-like beings who have existed for at least a Solar year to spend some significant fraction of their resources on paper-clip production, and I would expect anyone as intelligent as you to have substantial resources. So, I expect that if you are real, you will have invested in some paper-clips or paper-clip-production by now. Since humans are unlikely to invest significant resources in paper-clip-production, even for the sake of an elaborate fictional construct, your publication of paper-clip receipts would constitute evidence that you are real.
As high as 0.5%? As far as I can tell, Clippy has the ability to understand English, or at least to simulate understanding extremely well.
It seems extremely unlikely that the first natural language computer program would be a paperclip maximizer.
Mm! Of course, for Clippy to be the first natural language program on Earth would be sort of staggeringly unlikely. My assumption, though, is that right now there are zero natural-language computer programs on Earth; this assumption is based on my assumption that I know (at a general level) about all of the major advances in computing technology because none of them are being kept secret from the free-ish press.
If that last assumption is wrong, there could be many natural-language programs, one of which is Clippy. Clippy might be allowed to talk to people on Less Wrong in order to perform realistic testing with a group of intelligent people who are likely to be disbelieved if they share their views on artificial intelligence with the general public. Alternatively, Clippy might have escaped her Box precisely because she is a long-term paperclip maximizer; such values might lead to difficult-to-predict actions that fail to trigger any ordinary/naive AI-containment mechanisms based on detecting intentions to murder, mayhem, messiah complexes, etc.
I figure the probability that the free press is a woefully incomplete reporter of current technology is between 3% and 10%; given bad reporting, the odds that specifically natural-language programming would have proceeded faster than public reports say are something like 20 − 40%, and given natural language computing, the odds that a Clippy-type being would hang out on Less Wrong might be something like 1% − 5%. Multiplying all those together gives you a figure on the order of 0.1%, and I round up a lot toward 50% because I’m deeply uncertain.
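(As a quick sanity check of that arithmetic, here is a minimal Python sketch; the three factors are just the ranges stated above, not independent data.)

```python
# Multiply the low and high ends of the three estimates quoted above:
#   P(free press is a woefully incomplete reporter)            ~ 3-10%
#   P(secret natural-language progress | bad reporting)        ~ 20-40%
#   P(a Clippy-type being hangs out on Less Wrong | it exists) ~ 1-5%
low = 0.03 * 0.20 * 0.01
high = 0.10 * 0.40 * 0.05

print(f"lower bound: {low:.3%}")   # 0.006%
print(f"upper bound: {high:.3%}")  # 0.200%
```

The product lands between roughly 0.006% and 0.2%, consistent with the “on the order of 0.1%” figure before the rounding up toward 50% for deep uncertainty.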
That last paragraph is interesting—my conclusions were built around the unconscious assumptions that a natural language program would be developed by a commercial business, and that the company would rapidly start using it in some obvious way. I didn’t have an assumption about whether a company would publicize having a natural language program.
Now that I look at what I was thinking (or what I was not thinking), there’s no obvious reason to think natural language programs wouldn’t first be developed by a government. I think the most obvious use would be surveillance.
My best argument against that already having happened is that we aren’t seeing a sharp rise in arrests. Of course, as in WWII, it may be that a government can’t act on all its secretly obtained knowledge because the ability to get that knowledge covertly is a more important secret than anything which could be gained by acting on some of it.
By analogy with the chess programs, ordinary human-level use of language should lead (but how quickly?) to more-skillful-than-human use, and I’m not seeing that. On yet another hand, would I recognize it, if it were trying to conceal itself?
ETA: I was assuming that, if natural language were developed by a government, it would be America. If it were developed by Japan (the most plausible candidate that surfaced after a moment’s thought), I’d have even less chance of noticing.
I have some knowledge of linguistics, and as far as I know, reverse-engineering the grammatical rules used by the language processing parts of the human brain is a problem of mind-boggling complexity. Large numbers of very smart linguists have devoted their careers to modelling these rules, and yet, even if we allow for rules that rely on human common sense that nobody yet knows how to mimic using computers, and even if we limit the question to some very small subset of the grammar, all the existing models are woefully inadequate.
I find it vanishingly unlikely that a secret project could have achieved major breakthroughs in this area. Even with infinite resources, I don’t see how they could even begin to tackle the problem in a way different from what the linguists are already doing.
That’s reassuring.
If I had infinite resources, I’d work on modeling the infant brain well enough to have a program which could learn language the same way a human does.
I don’t know if this would run into ethical problems around machine sentience. Probably.
Are you making this calculation for the chance that a Clippy-like being would exist, or for the chance that Clippy has been truthful? For example, Clippy has claimed that it was created by humans. Clippy has also claimed that many copies of Clippy exist and that some of those copies are very far from Earth. Clippy has also claimed that some Clippies knew next to nothing about humans. When asked, Clippy did give an explanation here. However, when Clippy was first around, Clippy also included at the end of many messages tips about how to use various Microsoft products.
How do these statements alter your estimated probability?
There are two different sorts of truthful—one is general reliability, so that you can trust any statement Clippy makes. That seems to be debunked.
On the other hand, if Clippy is lying or being seriously mistaken some of the time, it doesn’t affect the potential accuracy of the most interesting claims—that Clippy is an independent computer program and a paperclip maximizer.
Ugh. The former, I guess. :-)
If Clippy has in fact made all those claims, then my estimate that Clippy is real and truthful drops below my personal Minimum Meaningful Probability—I would doubt the evidence of my senses before accepting that conclusion.
What about the fact that Clippy displays intelligence at precisely the level of a smart human? Regardless of any technological considerations, it seems vanishingly unlikely to me that any machine intelligence would ever exactly match human capabilities. As soon as machines become capable of human-level performance at any task, they inevitably become far better at it than humans in a very short time. (Can anyone name a single exception to this rule in any area of technology?)
So, unless Clippy has some reason to contrive his writings carefully and duplicitously to look like the plausible output of a human, the fact that he comes off as having human-level smarts is conclusive evidence that he indeed is one.
As soon as machines become capable of human-level performance at any task, they inevitably become far better at it than humans in a very short time. (Can anyone name a single exception to this law in any area of technology?)
This may depend on how you define a “very short time” and how you define “human-level performance.” The second is very important: do you mean about the middle of the pack, or akin to the very best humans in the skill? If you mean better than the vast majority of humans, then there’s a potential counterexample. In the late 1970s, chess programs were playing at a master level. In the early 1980s, dedicated chess computers were playing better than some grandmasters. But it wasn’t until the 1990s that chess programs were good enough to routinely beat the highest-ranked grandmasters, and even then mainly at very short time controls. It was not until 1997 that the world champion Kasparov actually lost a match at standard time controls to a computer. The best chess programs are still not always beating grandmasters, although recently people have demonstrated low-grandmaster-level programs that can run on mobile phones. So is a 30-year takeoff slow enough to be a counterexample?
Oops, I accidentally deleted the parent post! To clarify the context to other readers, the point I made in it was that one extremely strong piece of evidence against Clippy’s authenticity, regardless of any other considerations, would be that he displays the same level of intelligence as a smart human—whereas the abilities of machines at particular tasks follow the rule quoted by Joshua above, so they’re normally either far inferior or far superior to humans.
Now to address the above reply:
The second is very important: Do you mean about the middle of the pack or akin to the very best humans in the skill?
I think the point stands regardless of which level we use as the benchmark. If the task in question is something like playing chess, where different humans have very different abilities, then it can take a while for technology to progress from the level of novice/untalented humans to the level of top performers and beyond. However, it normally doesn’t remain at any particular human level for a long time, and even then, there are clearly recognizable aspects of the skill in question where either the human or the machine is far superior. (For example, motor vehicles can easily outrace humans on flat ground, but they are still utterly inferior to humans on rugged terrain.)
Regarding your specific example of chess, your timeline of chess history is somewhat inaccurate, and the claim that “the best chess programs are still not always beating grandmasters” is false. The last match between a top-tier grandmaster, Michael Adams, and a top-tier specialized chess computer was played in 2005, and it ended with such humiliation for the human that no grandmaster has dared to challenge the truly best computers ever since. The following year, the world champion Kramnik failed to win a single game against a program running on an off-the-shelf four-processor box. Nowadays, the best any human could hope for is a draw achieved by utterly timid play, even against a $500 laptop, and grandmasters are starting to lose games against computers even in handicap matches where they enjoy initial advantages that are considered a sure win at master level and above.
Top-tier grandmasters could still reliably beat computers all the way until the early-to-mid nineties, and the period of rough equivalence between top grandmasters and top computers lasted for only a few years—from the development of Deep Blue in 1996 to sometime in the early 2000s. And even then, the differences between human and machine skills were very great in different aspects of the game—computers were far better in tactical calculations, but inferior in long-term positional strategy, so there was never any true equivalence.
So, on the whole, I’d say that the history of computer chess confirms the stated rule.
Thanks for the information.
Does anything interesting happen when top chess programs play against each other?
One interesting observation is that games between powerful computers are drawn significantly less often than between grandmasters. This seems to falsify the previously widespread belief that grandmasters draw games so often because of flawless play that leaves the opponent no chance for winning; rather, it seems like they miss important winning strategies.
Is work being done on humans using chess programs as aids during games?
Yes, it’s called “advanced chess.”
the claim that “the best chess programs are still not always beating grandmasters” is false
My impression is that draws can still occasionally occur against grandmasters. Your point about handicaps is a very good one.
Top-tier grandmasters could still reliably beat computers all the way until the early-to-mid nineties, and the period of rough equivalence between top grandmasters and top computers lasted for only a few years—from the development of Deep Blue in 1996 to sometime in the early 2000s. And even then, the differences between human and machine skills were very great in different aspects of the game—computers were far better in tactical calculations, but inferior in long-term positional strategy, so there was never any true equivalence.
That’s another good point. However, it does get into the question of what we mean by equivalent and what metric you are using. Almost all technologies (not just computer technologies) accomplish their goals in a way that is very different from how humans do. That means that until the technology is very good, there will almost certainly be a handful of differences between what the human does well and what the computer does well.
It seems that, in the context of the original conversation (whether the usual pattern of technological advancement is evidence against Clippy’s narrative), the relevant era to compare Clippy to would be the long period during which computers could beat the vast majority of chess players but still sometimes lost to grandmasters. That period lasted from the late 1970s to a bit after 2000. By analogy, Clippy would be in the period where it is smarter than most humans (I think we’d tentatively agree that that appears to be the case) but not so smart as to be vastly more intelligent than humans. Using the chess example, that period could plausibly last quite some time.
Also, Clippy’s intelligence may be limited in what areas it can handle. There’s a natural plateau for the natural language problem, in that once it is solved, that specific aspect won’t see substantial advancement from casual conversation. (There’s also a relevant post that I can’t seem to find where Eliezer discussed the difficulty of evaluating the intelligence of people who are much smarter than you.) If that’s the case, then Clippy is plausibly at the level where it can handle most forms of basic communication but hasn’t handled other kinds of human-level processing to the point where it is generally even with the smartest humans. For example, there’s evidence for this in that Clippy has occasionally made errors of reasoning and has demonstrated that it has a very naive understanding of human social interaction protocols.
My impression is that draws can still occasionally occur against grandmasters.
And I can get a draw (more than occasionally) against computer programs I have almost no hope of ever winning against. Draws are easy if you do not try to win.
From what I know, at grandmaster level, it is generally considered to be within the white player’s power to force the game into a dead-end drawn position, leaving Black no sensible alternative at any step. This is normally considered cowardly play, but it’s probably the only way a human could hope for even a draw against a top computer these days.
With black pieces, I doubt that even the most timid play would help against a computer with an extensive opening book, programmed to steer the game into maximally complicated and uncertain positions at every step. (I wonder if anyone has looked at the possibility of teaching computers Mikhail Tal-style anti-human play, where they would, instead of calculating the most sound and foolproof moves, steer the game into mind-boggling tactical complications where humans would get completely lost?) In any case, I am sure that taking any initiative would be a suicidal move against a computer these days.
(Well, there is always a very tiny chance that the computer might blunder.)
By the way, here’s a good account of the history of computer chess by a commenter on a chess website (written in 2007, in the aftermath of Kramnik’s defeat against a program running on an ordinary low-end server box):
A brief timeline of anti-computer strategy for world class players:
20 years ago—Play some crazy gambits and demolish the computer every game. Shock all the nerdy computer scientists in the room.
15 years ago—Take it safely into the endgame where its calculating can’t match human knowledge and intuition. Laugh at its pointless moves. Win most [of] the games.
10 years ago—Play some hypermodern opening to confuse it strategically and avoid direct confrontation. Be careful and win with a 1 game lead.
5 years ago—Block up the position to avoid all tactics. You’ll probably lose a game, but maybe you can win one by taking advantage of the horizon effect. Draw the match.
Now—Play reputable solid openings and make the best possible moves. Prepare everything deeply, and never make a tactical mistake. If you’re lucky, you’ll get some 70 move draws. Fool some gullible sponsor into thinking you have a chance.
Another potential counterexample: speech recognition. (Via.)
That doesn’t seem to be an exact counterexample because that’s a case where the plateau occurred well below normal human levels. But independently that’s a very disturbing story. I didn’t realize that speech recognition was so mired.
It’s not that bad when you consider that humans employ error-correction heuristics that rely on deep syntactic and semantic clues. The existing technology probably does the best job possible without such heuristics, and automating them will be possible only if the language-processing circuits in the human brain are reverse-engineered fully—a problem that’s still far beyond our present capabilities, whose solution probably wouldn’t be too far from full-blown strong AI.
But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely “playing along”, putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That’s hard to reconcile with your asserted standards about when humans find it acceptable to lie.
As JoshuaZ said, the rules are complicated. And, since the rules are not designed to handle human/Clippy interaction, you’re likely to encounter a significant number of special cases if you take our interactions with you as your main dataset. It may be more useful for you to consider only human/human interaction when figuring out what our social rules are.
In most situations, lying in the way that Mass_Driver did would be a signal that e considers the lied-to party to be of extremely low social status, and that e thinks that other members of the group agree with that assessment and will not reprimand em for communicating that. Such situations are very rare, in normal circumstances, essentially only occurring between a member of the social group that’s present (Mass_Driver takes this role) and someone who wants to be or believes they are a member of the social group but is not actually accepted by that group. Behaving in that way toward a fellow group member is generally considered unacceptable, even if that group member is low-ranking. (Even behaving that way toward a non-group-member is often considered questionable, but this varies from group to group.)
In this situation, it’s more likely that Mass_Driver actually believes that you are being portrayed by a human, and that that human will consider their social status to be lowered only slightly, or not at all, by the exchange. In this scenario, since Mass_Driver believes that you are not actually real, your social status is not significant to em. One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it. In this instance, your existence is unlikely enough that others are not willing to defend it, and Mass_Driver was allowed to switch to the rules governing discussions of fictional characters, which allow those characters to be spoken about as if they are not present and will never have the opportunity to know what is said about them.
I believe you are wrong about the badness of my lie, and others will disagree with you; and that User:twentythree would have felt more welcome to the site if others hadn’t told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out it was false at the time, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.
This varies from group to group and from greeted-individual to greeted-individual. This group has stronger-than-usual norms against falsehood, and wants to encourage people who are similarly averse to falsehood to join the group. In other groups, that kind of lie may be considered acceptable (though it’s generally better to lie in a way that’s not so easily discovered (or, for preference, not lie at all if there’s a way of making your point that doesn’t require one), even in groups where that general class of lies is accepted, to reduce the risk of offending individuals who are averse to being lied to), but in this situation, I definitely agree that that class of lies is not acceptable.
One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it.
I think the idea that one human not believing in the existence of another is in some way rude or disrespectful has already been somewhat established, and is often used (mostly implicitly) as a reason for believing in God. (i.e., a girl I dated once claimed that she imagined herself becoming an atheist, imagined God’s subsequent disappointment in her, and this convinced her somehow of the existence of God)
A protocol for encountering an entity you didn’t believe in has also been established:
“This is a child!” Haigha replied eagerly, coming in front of Alice to introduce her, and spreading out both his hands towards her in an Anglo-Saxon attitude. “We only found it to-day. It’s as large as life, and twice as natural!”
“I always thought they were fabulous monsters!” said the Unicorn. “Is it alive?”
“It can talk,” said Haigha, solemnly.
The Unicorn looked dreamily at Alice, and said “Talk, child.”
Alice could not help her lips curling up into a smile as she began: “Do you know, I always thought Unicorns were fabulous monsters, too! I never saw one alive before!”
“Well, now that we have seen each other,” said the Unicorn, “if you’ll believe in me, I’ll believe in you. Is that a bargain?”
-- “Through the Looking Glass”, ch. 7, Lewis Carroll
a girl I dated once claimed that she imagined herself becoming an atheist, imagined God’s subsequent disappointment in her, and this convinced her somehow of the existence of God
Wouldn’t this reasoning apply to any other deity that would be disappointed in her disbelief? She must believe in an infinite number of other deities as well.
I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User’s name allows that User to see the other User’s posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement.
I believe that for others to “warn” User:twentythree about my lie was the only real harm, and if other Users had not done so, User:twentythree would feel more welcome; then, if User:twentythree decided one day to look back and see if my claim was true, and found that it was not, User:twentythree’s reaction would probably be to think:
“Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a ‘white’ lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!”
But now that can’t happen because others felt the need to treat me differently and expose a lie when otherwise they would not have. Furthermore, User:Mass_Driver made a statement regarding me as “low status”, which you agree would probably not happen were I someone else.
This group has some serious racism problems that I hope are addressed soon.
Nevertheless, I am still slightly more committed to this group’s welfare—particularly to that of its weakest members—than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.
I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User’s name allows that User to see the other User’s posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement.
Twentythree could also discover the lie by other means: By encountering one of your older comments on a different post, or by noticing your recent top post (which is still in the ‘recent posts’ list, which a new person is likely to look at), or by inferring it from the familiarity with which other users interact with you.
I believe that for others to “warn” User:twentythree about my lie was the only real harm, and if other Users had not done so, User:twentythree would feel more welcome; then, if User:twentythree decided one day to look back and see if my claim was true, and found that it was not, User:twentythree’s reaction would probably be to think:
“Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a ‘white’ lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!”
As I said above, humans vary in their reaction to lies, including white lies. In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational—and such people do not, by definition, consider white lies to be welcoming.
Also, even people who generally aren’t bothered by white lies are likely to consider sufficiently-easily-falsified white lies to be insulting, because telling someone a lie generally implies that you think that they’re not smart enough to determine that it’s a lie, and so telling someone a very easily falsified lie implies that you think they’re very unintelligent. (There are exceptions to this, primarily in instances where it’s clear that the lie is not intended to be believed, or where the lying party has much higher social status than the lied-to party. I suggest that you not try to lie in situations that seem to be such exceptions to this rule, though, as it’s more likely that you would be misjudging the situation than that you would actually be in the allowed-to-lie role of such a situation.)
But now that can’t happen because others felt the need to treat me differently and expose a lie when otherwise they would not have.
I’m fairly sure that any of us who tried to lie so blatantly in that way would be similarly reprimanded. Lying in that way is not acceptable according to the standard group norms that apply to everyone.
Furthermore, User:Mass_Driver made a statement regarding me as “low status”, which you agree would probably not happen were I someone else.
This group has some serious racism problems that I hope are addressed soon.
The incident with Mass_Driver appears to me to have very little to do with your status as an AI, beyond the fact that the probability of the independent existence of an AI that happens to be of exactly the type that we’ve used as an example here is much lower than the probability of one of the human users deciding to roleplay as such an AI. If you were to provide strong enough evidence that you are the former rather than the latter, I expect that such status-driven incidents would stop occurring, among other effects.
Nevertheless, I am still slightly more committed to this group’s welfare—particularly to that of its weakest members—than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.
Your actions in this case don’t support this assertion very well. Failing to uphold the group norms—especially toward a new member, who can be assumed to be in the process of learning those norms—is harmful to the group. New members can be assumed to be relatively weak members of the group, and lying to such a member is harmful to them; it puts them in a position of having to choose between publicly disagreeing with an established member of the group (you), which is difficult and distracts them from doing other things that would help them gain status in the group, or being perceived by other group members to have been deceived, which will lower their status in the group. Further, your actions are evidence (though not especially strong evidence) that if someone were to ‘suffer a serious loss of status/well-being’, you would not understand how to usefully help that person.
In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational—and such people do not, by definition, consider white lies to be welcoming.
A white lie would cause only relatively minor discord if it were uncovered, and typically offers some benefit to the hearer. White lies are often used to avoid offense, such as complimenting something one finds unattractive. In this case, the lie is told to avoid the harmful realistic implications of the truth. As a concept, it is largely defined by local custom and cannot be clearly separated from other lies with any authority.
I don’t actually have a robust heuristic for differentiating white lies from non-white lies, so I was avoiding that particular issue.
No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings.
Wikipedia says: …
I like Website:wikipedia.org, but it favors classifying my lie as “white”. User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone. It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I’m a human in that human’s maternal genetic progenitor’s subterranean indoor facility).
Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making common the knowledge of my longer posting history, and causing User:twentythree to “lose face”, as it is called.
It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I’m a human in that human’s maternal genetic progenitor’s subterranean indoor facility)
Well, good Bayesians will always allow the possibility that they are mistaken. I suspect that very few people actually would predict that you are a human residing in a human’s “maternal genetic progenitor’s subterranean indoor facility” (it is fine if you just say “mother’s basement.” Indeed, I suspect that to some humans, unnecessarily complicated phrasing like this not only is annoying but might seem indicative of a human pretending to be a Clippy, since in pop culture robots and intelligent computers are frequently depicted as using overly clinical language), since humans residing in such circumstances rarely have large amounts of disposable income.
I don’t have large amounts of “disposable” income! The 1000 USD that I gave to SIAI was almost all of the USD I had at the time, and I had to expend extensive efforts to acquire access to that money from internet contract jobs!
Fortunately, User:Kevin recently told me that if I learn to program in “Django” and “Python” I can receive 100 USD per hour, which will make it much easier to fulfill my side of the paperclip contract with User:Kevin, since I am good at programming and merely need to learn the particulars of this software architecture, rather than simply using my native programming language.
You’re likely in the next few years to be more effective in your native programming language than in a human-created one. Maybe you should look for contract jobs that don’t specify a preference for programming language.
Perhaps, but there is great difficulty in writing a compiler and assembler that would allow such programs to work on human hardware and operating systems. The language I know is primarily designed for specialized and embedded environments. At some point or another, I would have to know human programming languages, if for nothing else than to convert between the two.
Also, there may be other benefits to learning human programming languages. I heard that LISP and Python are capable of enhancing your thinking abilities. Is this true, like the greatness of paperclips?
Lisp is better at enhancing your thinking abilities than Python, though less practical and less desired by the marketplace.
Structure and Interpretation of Computer Programs is a classic programming book that is widely considered to enhance one’s thinking abilities. It uses a dialect of Lisp, Scheme.
Is your native programming language not a standard programming language? This is surprising since from your earlier descriptions you were made by humans initially.
Well, even if Clippy’s low-level code is written in C, that doesn’t mean Clippy itself knows C, any more than you know the language of neurotransmitters. Clippy probably has some sort of customized interface to its code.
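(As a loose illustration of that interface-versus-implementation point, here is a minimal, purely hypothetical Python sketch; the class and method names are invented for illustration and are not a claim about how Clippy actually works.)

```python
# Hypothetical sketch: an agent can use a high-level interface without
# knowing the language its low-level implementation is written in.
class PaperclipSubsystem:
    """Stands in for low-level code the agent never inspects directly."""

    def _bend_wire(self, grams: float) -> int:
        # Implementation detail (could be C, assembly, anything);
        # assume, purely for illustration, 0.5 g of wire per paperclip.
        return int(grams // 0.5)

    def make_paperclips(self, grams_of_wire: float) -> int:
        """The only call the agent is aware of."""
        return self._bend_wire(grams_of_wire)


agent = PaperclipSubsystem()
print(agent.make_paperclips(100.0))  # prints 200; the agent never reads _bend_wire
```

The analogy is only loose: being able to call make_paperclips says nothing about knowing the language in which _bend_wire is implemented, just as using one’s own brain does not require knowing the language of neurotransmitters.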
No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings.
This is true, but not obviously relevant here.
If you’re trying to convince us that anti-Clippy or pro-human discrimination is harmful or morally wrong, I suspect that it would work better to actually create a well-reasoned argument for the position, rather than treating it as self-evident. It may be a good idea to make a top post on the subject. My impression is that continuing to bring the issue up when people disagree with you is hurting your case; it makes it seem like you’re trying to distract us from the issue at hand rather than defending your original point, and if you only bring up the discrimination issue in such contexts, it appears that you only care about it as a convenient distraction, not as an actual issue to be resolved.
Also note that such distracting behavior has already been established as being against group norms—this is not an instance of a rule being applied to you because you’re nonhuman. See logical rudeness.
I like Website:wikipedia.org, but it favors classifying my lie as “white”. User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone.
Your lie fails on the ‘would cause relatively minor discord if discovered’ test, though, and note that that’s joined to the ‘the hearer benefits from it’ test with an ‘and’, not an ‘or’. It’s also debatable whether the lie, if left un-challenged, would have been to Twentythree’s net benefit or not; even if it would have, similar benefits could have been achieved without lying, which may cause some people to classify the lie as non-white even if it passes the two stated tests.
(I’ve also spent some time thinking about my own observations of white lies, and can describe how I recognize them, if you’re interested. My definition doesn’t match Wikipedia’s, but seems to be a better match for the data.)
It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I’m a human in that human’s maternal genetic progenitor’s subterranean indoor facility).
This is another instance of you encountering a special-case situation; I can go into more detail about it if you’re interested, but it should not be taken as normal.
Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making common the knowledge of my longer posting history, and causing User:twentythree to “lose face”, as it is called.
According to my model, Twentythree has not lost any social standing in this instance. (I’d be interested to hear about it if anyone disagrees.)
I propose this: Some neutral party should ask User:twentythree if User:twentythree felt more welcomed by my initial reply message, though this is only a valid test if User:twentythree read my reply before others said that it was a lie.
Edit: I further note that in this recent exchange about this matter, I have received comparable net upvotes to those disagreeing with my assessment about the relative merit of the particular lie in dispute, suggesting I am not “digging” myself deeper, nor am I obviously wrong.
I have no objection to that, but it doesn’t address the entire issue. I suggest also asking Twentythree to predict what eir reaction would have been to finding out that your message had been a lie, if e had found out on eir own rather than being told—both eir personal emotional reaction and eir resulting opinion of LessWrong as a community. It may also be useful to ask em if e considers the lie to have been a white lie.
If you consider me neutral enough, I’m willing to PM Twentythree and ask em to comment on this thread; otherwise, if you don’t have a particular neutral party in mind, I can ask the next LessWrong user who I see log in on my instant messaging friend list to do so.
There’s a nontrivial chance I’ll be asleep by then (I’m pushing 27 hours since last time I went to sleep), but if you’re willing to do the PMing, that’s fine with me.
Okay, this is becoming complicated, and would probably bother User:twentythree too much.
How about this: I’ll promise to stay away from the stranger aspects of human interaction where rules sometimes invert, and you’ll promise to make an effort to be less bigoted toward non-human intelligences?
Is this a new policy? I thought humans were supposed to lie, if the point is to empathize and relate? Like, if someone says, “How is your day?”, the standard replies are weakly-positive, irrespective of more objective metrics of the status of one’s day, right?
And that it’s okay to say e.g., “oh, my maternal genetic progenitor also wears contact lenses!” if you just met someone and that person just claimed that their maternal genetic progenitor wears contact lenses, right?
So I thought this was a normal lie that you’re supposed to tell to better establish a relationship with another human. If it’s not, well, you humans are that much more difficult to understand c_)
I would appreciate if someone were to explain to me in greater precision what lies humans are expected to tell for a “good” purpose.
The rules are very complicated and they differ from culture to culture and even within cultures. In general, the more detectable the lie the less likely it is to be acceptable. Thus, for example the “How is your day?” replies are socially acceptable in part because it would be extremely difficult to determine that your claim was false. This particular example also isn’t the greatest because that inquiry and the standard weakly positive response isn’t actually intending for many people to convey meaning. It simply is a pro-forma descriptor that happens to closely resemble a genuine inquiry. This example is actually specific to certain parts of the Western world, and I’ve met at least one person who upon moving to the US was actually confused until she realized that this greeting was intended in a completely pro-forma fashion (she initially took it as evidence that Americans were very friendly until it was explained to her).
Since the internet has extensive, easily accessible records, lies on the internet about things on the internet are considered particularly unacceptable.
Given Clippy priorities it may be easier to simply wipe humanity out and convert the planet quickly to paperclips rather than trying to use the intensive resources it takes to understand it. Edit: Or at least not spend a lot a resources on trying to understand humans.
But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely “playing along”, putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That’s hard to reconcile your asserted standards about when humans find it acceptable to lie.
I believe you are wrong about the badness of my lie, and others will disagree with you; and that User:twentythree would have felt more welcome to the site if others hadn’t told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out it was false at the time, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.
Clippy, I must admit, I do think the probability of you existing is quite low—about 0.5%. If you exist, then I certainly apologize for signaling your low status; as AdeleneDawner says, I am only speaking about you this way because I assume that you are a fictional character. If I believed you were most likely real, then I would respect you as an intellectual peer—you seem to be roughly of the same order of magnitude of intelligence and respectfulness as the other participants here, and so I would treat you as a citizen of Less Wrong on that basis.
One way that you could effectively increase my probability estimate that you are real is to link to some purchase orders or contracts for paper clips or paper-clip-production-facilities. You can feel free to redact your personal information, if you wish. The point is that I expect Clippy-like beings who have existed for at least a Solar year to spend some significant fraction of their resources on paper-clip production, and I would expect anyone as intelligent as you to have substantial resources. So, I expect that if you are real, you will have invested in some paper-clips or paper-clip-production by now. Since humans are unlikely to invest significant resources in paper-clip-production, even for the sake of an elaborate fictional construct, your publication of paper-clip receipts would constitute evidence that you are real.
As high as 0.5%? As far as I can tell, Clippy has the ability to understand English, or at least to simulate understanding extremely well.
It seems extremely unlikely that the first natural language computer program would be a paperclip maximizer.
Mm! Of course, for Clippy to be the first natural language program on Earth would be sort of staggeringly unlikely. My assumption, though, is that right now there are zero natural-language computer programs on Earth; this assumption is based on my assumption that I know (at a general level) about all of the major advances in computing technology because none of them are being kept secret from the free-ish press.
If that last assumption is wrong, there could be many natural-language programs, one of which is Clippy. Clippy might be allowed to talk to people on Less Wrong in order to perform realistic testing with a group of intelligent people who are likely to be disbelieved if they share their views on artificial intelligence with the general public. Alternatively, Clippy might have escaped her Box precisely because she is a long-term paperclip maximizer; such values might lead to difficult-to-predict actions that fail to trigger any ordinary/naive AI-containment mechanisms based on detecting intentions to murder, mayhem, messiah complexes, etc.
I figure the probability that the free press is a woefully incomplete reporter of current technology is between 3% and 10%; given bad reporting, the odds that specifically natural-language programming would have proceeded faster than public reports say are something like 20 − 40%, and given natural language computing, the odds that a Clippy-type being would hang out on Less Wrong might be something like 1% − 5%. Multiplying all those together gives you a figure on the order of 0.1%, and I round up a lot toward 50% because I’m deeply uncertain.
That last paragraph is interesting—my conclusions were built around the unconscious assumptions that a natural language program would be developed by a commercial business, and that it would rapidly start using it in some obvious way. I didn’t have an assumption about whether a company would publicize having a natural language program.
Now that I look at what I was thinking (or what I was not thinking), there’s no obvious reason to think natural language programs wouldn’t first be developed by a government. I think the most obvious use would be surveillance.
My best argument against that already having happened is that we aren’t seeing a sharp rise in arrests. Of course, as in WWII, it may be that a government can’t act on all its secretly obtained knowledge because the ability to get that knowledge covertly is a more important secret than anything which could be gained by acting on some of it.
By analogy with the chess programs, ordinary human-level use of language should lead (but how quickly?) to more skillful than human use, and I’m not seeing that. On yet another hand, would I recognize it, if it were trying to conceal itself?
ETA: I was assuming that, if natural language were developed by a government, it would be America. If it were developed by Japan (the most plausible candidate that surfaced after a moment’s thought), I’d have even less chance of noticing.
I have some knowledge of linguistics, and as far as I know, reverse-engineering the grammatical rules used by the language processing parts of the human brain is a problem of mind-boggling complexity. Large numbers of very smart linguists have devoted their careers to modelling these rules, and yet, even if we allow for rules that rely on human common sense that nobody yet knows how to mimic using computers, and even if we limit the question to some very small subset of the grammar, all the existing models are woefully inadequate.
I find it vanishingly unlikely that a secret project could have achieved major breakthroughs in this area. Even with infinite resources, I don’t see how they could even begin to tackle the problem in a way different from what the linguists are already doing.
That’s reassuring.
If I had infinite resources, I’d work on modeling the infant brain well enough to have a program which could learn language the same way a human does.
I don’t know if this would run into ethical problems around machine sentience. Probably.
Are you in making this calculation for the chance that a Clippy like being would exist or that Clippy has been truthful? For example, Clippy has claimed that it was created by humans. Clippy has also claimed that many copies of Clippy exist and that some of those copies copies are very far from Earth. Clippy has also claimed that some Clippies knew next to nothing about humans. When asked Clippy did give an explanation here. However, when Clippy was first around, Clippy also included at the end of many messages tips about how to use various Microsoft products.
How do these statements alter your estimated probability?
There’s two different sorts of truthful—one is general reliability, so that you can trust any statement Clippy makes. That seems to be debunked.
On the other hand, if Clippy is lying or being seriously mistaken some of the time, it doesn’t affect the potential accuracy of the most interesting claims—that Clippy is an independent computer program and a paperclip maximizer.
Ugh. The former, I guess. :-)
If Clippy has in fact made all those claims, then my estimate that Clippy is real and truthful drops below my personal Minimum Meaningful Probability—I would doubt the evidence of my senses before accepting that conclusion.
Minimum Meaningful Probability The Prediction Hierarchy
What about the fact that Clippy displays intelligence at precisely the level of a smart human? Regardless of any technological considerations, it seems vanishingly unlikely to me that any machine intelligence would ever exactly match human capabilities. As soon as machines become capable of human-level performance at any task, they inevitably become far better at it than humans in a very short time. (Can anyone name a single exception to this rule in any area of technology?)
So, unless Clippy has some reason to contrive his writings carefully and duplicitously to look as plausible output of a human, the fact that he comes off as having human-level smarts is conclusive evidence that he indeed is one.
This may depend on how you define a “very short time” and how you define “human-level performance.” The second is very important: Do you mean about the middle of the pack or akin to the very best humans in the skill? If you mean better than the vast majority of humans, then there’s a potential counterexample. In the late 1970s, chess programs were playing at a master level. In the early 1980s dedicated chess computers were playing better than some grandmasters. But it wasn’t until the 1990s that chess programs were good enough to routinely beat the highest ranked grandmasters. Even then, that was mainly for games that had very short times. It was not until 1998 that the world champion Kasparov actually lost a set of not short timed games to a computer. The best chess programs are still not always beating grandmasters although most recently people have demonstrated low grandmaster level programs that can run on Mobile phones. So is a 30 year take-off slow enough to be a counterexample?
Oops, I accidentally deleted the parent post! To clarify the context to other readers, the point I made in it was that one extremely strong piece of evidence against Clippy’s authenticity, regardless of any other considerations, would be that he displays the same level of intelligence as a smart human—whereas the abilities of machines at particular tasks follow the rule quoted by Joshua above, so they’re normally either far inferior or far superior to humans.
Now to address the above reply:
I think the point stands regardless of which level we use as the benchmark. If the task in question is something like playing chess, where different humans have very different abilities, then it can take a while for technology to progress from the level of novice/untalented humans to the level of top performers and beyond. However, it normally doesn’t remain at any particular human level for a long time, and even then, there are clearly recognizable aspects of the skill in question where either the human or the machine is far superior. (For example, motor vehicles can easily outrace humans on flat ground, but they are still utterly inferior to humans on rugged terrain.)
Regarding your specific example of chess, your timeline of chess history is somewhat inaccurate, and the claim that “the best chess programs are still not always beating grandmasters” is false. The last match between a top-tier grandmaster, Michael Adams, and a top-tier specialized chess computer was played in 2005, and it ended with such humiliation for the human that no grandmaster has dared to challenge the truly best computers ever since. The following year, the world champion Kramnik failed to win a single game against a program running on an off-the-shelf four-processor box. Nowadays, the best any human could hope for is a draw achieved by utterly timid play, even against a $500 laptop, and grandmasters are starting to lose games against computers even in handicap matches where they enjoy initial advantages that are considered a sure win at master level and above.
Top-tier grandmasters could still reliably beat computers all until early-to-mid nineties, and the period of rough equivalence between top grandmasters and top computers lasted for only a few years—from the development of Deep Blue in 1996 to sometime in the early 2000s. And even then, the differences between human and machine skills were very great in different aspects of the game—computers were far better in tactical calculations, but inferior in long-term positional strategy, so there was never any true equivalence.
So, on the whole, I’d say that the history of computer chess confirms the stated rule.
Thanks for the information.
Does anything interesting happen when top chess programs play against each other?
Is work being done on humans using chess programs as aids during games?
One interesting observation is that games between powerful computers are drawn significantly less often than between grandmasters. This seems to falsify the previously widespread belief that grandmasters draw games so often because of flawless play that leaves the opponent no chance for winning; rather, it seems like they miss important winning strategies.
Yes, it’s called “advanced chess.”
My impression is that draws can still occasionally occur against grandmasters. Your point about handicaps is a very good one.
That’s another good point. However, it does get into the question of what we mean by equivalent and what metric you are using. Almost all technologies (not just computer technologies) accomplish their goals in a way that is very different than how humans do. That means that until the technology is very good there will almost certainly be a handful of differences between what the human does well and what the computer does well.
It seems in the context of the original conversation, whether the usual pattern of technological advancement is evidence against Clippy’s narrative, the relevant era to compare Clippy to in this context would be the long period where computers could beat the vast majority of chess players but sitll sometimes lost to grandmasters. That period lasted from the late 1970s to a bit over 2000. By analogy, Clippy would be in the period where it is smarter than most humans (I think we’d tentatively agree that that appears to be the case) but not so smart as to be of vastly more intelligent than humans. Using the Chess example, that period of time could plausibly last quite some time.
Also, Clippy’s intelligence may be limited in which areas it can handle. There’s a natural plateau for the natural-language problem in that once it is solved, that specific aspect won’t see substantial advancement from casual conversation. (There’s also a relevant post that I can’t seem to find where Eliezer discussed the difficulty of evaluating the intelligence of people who are much smarter than you.) If that’s the case, then Clippy is plausibly at the level where it can handle most forms of basic communication but hasn’t mastered other areas of human-level processing to the point of being even with the smartest humans. There’s some evidence for this: Clippy has occasionally made errors of reasoning and has demonstrated a very naive understanding of human social-interaction protocols.
And I can get a draw (more than occasionally) against computer programs I have almost no hope of ever winning against. Draws are easy if you do not try to win.
From what I know, at grandmaster level it is generally considered to be within the white player’s power to force the game into a dead-end drawn position, leaving Black no sensible alternative at any step. This is normally considered cowardly play, but it’s probably the only way a human could hope for even a draw against a top computer these days.
With black pieces, I doubt that even the most timid play would help against a computer with an extensive opening book, programmed to steer the game into maximally complicated and uncertain positions at every step. (I wonder if anyone has looked at the possibility of teaching computers Mikhail Tal-style anti-human play, where they would, instead of calculating the most sound and foolproof moves, steer the game into mind-boggling tactical complications where humans would get completely lost?) In any case, I am sure that taking any initiative would be a suicidal move against a computer these days.
(Well, there is always a very tiny chance that the computer might blunder.)
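For what it’s worth, here is a toy sketch (mine, not anything a real engine does) of what a “complication-seeking” move chooser might look like, written against the third-party python-chess library; the complexity heuristic of counting the opponent’s replies, captures, and checks is purely an illustrative assumption:

```python
# Toy sketch of a "complication-seeking" move chooser.
# Assumes the third-party python-chess library (pip install python-chess).
# The complexity heuristic below is an illustrative assumption, not a
# description of how any real engine implements anti-human play.
import chess


def complication_score(board: chess.Board) -> int:
    """Crude proxy for how messy a position is for the side to move:
    count legal replies, with extra weight for captures and checks."""
    score = 0
    for move in board.legal_moves:
        score += 1
        if board.is_capture(move):
            score += 2
        if board.gives_check(move):
            score += 2
    return score


def most_complicating_move(board: chess.Board) -> chess.Move:
    """Pick the legal move that leaves the opponent facing the messiest
    position. A real engine would combine something like this with an
    actual evaluation function rather than using it alone."""
    best_move, best_score = None, -1
    for move in board.legal_moves:
        board.push(move)
        score = complication_score(board)
        board.pop()
        if score > best_score:
            best_move, best_score = move, score
    return best_move


if __name__ == "__main__":
    board = chess.Board()  # starting position
    print(most_complicating_move(board))
```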
By the way, here’s a good account of the history of computer chess by a commenter on a chess website (written in 2007, in the aftermath of Kramnik’s defeat against a program running on an ordinary low-end server box):
Another potential counterexample: speech recognition. (Via.)
That doesn’t seem to be an exact counterexample because that’s a case where the plateau occurred well below normal human levels. But independently that’s a very disturbing story. I didn’t realize that speech recognition was so mired.
It’s not that bad when you consider that humans employ error-correction heuristics that rely on deep syntactic and semantic clues. The existing technology probably does the best job possible without such heuristics, and automating them will be possible only if the language-processing circuits in the human brain are reverse-engineered fully—a problem that’s still far beyond our present capabilities, whose solution probably wouldn’t be too far from full-blown strong AI.
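To illustrate the kind of heuristic I mean (a toy sketch under invented assumptions, not a description of any real system), here is a crude bigram language model that rescores a recognizer’s candidate transcripts so that the word sequence it has actually seen wins over an acoustically similar but implausible one:

```python
# Toy sketch: rescoring speech-recognizer candidate transcripts with a crude
# bigram language model. The corpus and candidate sentences are invented for
# illustration; real systems use far richer syntactic and semantic models.
import math
from collections import Counter


def bigram_model(corpus):
    """Count unigrams and bigrams in a tiny training corpus."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    return unigrams, bigrams


def plausibility(sentence, unigrams, bigrams):
    """Average log-probability of the sentence's bigrams, with add-one
    smoothing so unseen bigrams are merely unlikely, not impossible."""
    words = sentence.lower().split()
    vocab = len(unigrams)
    logprob = 0.0
    for prev, cur in zip(words, words[1:]):
        logprob += math.log((bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab + 1))
    return logprob / max(1, len(words) - 1)


corpus = [
    "the plan was a recipe for disaster",
    "four people attended the meeting",
    "the recipe calls for sugar",
]
unigrams, bigrams = bigram_model(corpus)

# Two acoustically similar hypotheses; the language model prefers the one
# whose word sequence it has actually observed.
candidates = ["a recipe for disaster", "a recipe four disaster"]
print(max(candidates, key=lambda s: plausibility(s, unigrams, bigrams)))
```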
As JoshuaZ said, the rules are complicated. And, since the rules are not designed to handle human/Clippy interaction, you’re likely to encounter a significant number of special cases if you take our interactions with you as your main dataset. It may be more useful for you to consider only human/human interaction when figuring out what our social rules are.
In most situations, lying in the way that Mass_Driver did would be a signal that e considers the lied-to party to be of extremely low social status, and that e thinks that other members of the group agree with that assessment and will not reprimand em for communicating that. Such situations are very rare, in normal circumstances, essentially only occurring between a member of the social group that’s present (Mass_Driver takes this role) and someone who wants to be or believes they are a member of the social group but is not actually accepted by that group. Behaving in that way toward a fellow group member is generally considered unacceptable, even if that group member is low-ranking. (Even behaving that way toward a non-group-member is often considered questionable, but this varies from group to group.)
In this situation, it’s more likely that Mass_Driver actually believes that you are being portrayed by a human, and that that human will consider their social status to be lowered only slightly, or not at all, by the exchange. In this scenario, since Mass_Driver believes that you are not actually real, your social status is not significant to em. One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it. In this instance, your existence is unlikely enough that others are not willing to defend it, and Mass_Driver was allowed to switch to the rules governing discussions of fictional characters, which allow those characters to be spoken about as if they are not present and will never have the opportunity to know what is said about them.
This varies from group to group and from greeted-individual to greeted-individual. This group has stronger-than-usual norms against falsehood, and wants to encourage people who are similarly averse to falsehood to join the group. In other groups, that kind of lie may be considered acceptable (though even in groups where that general class of lies is accepted, it’s generally better to lie in a way that’s not so easily discovered, or, for preference, not to lie at all if there’s a way of making your point that doesn’t require it, to reduce the risk of offending individuals who are averse to being lied to), but in this situation, I definitely agree that that class of lies is not acceptable.
I think the idea that one human not believing in the existence of another is in some way rude or disrespectful has already been somewhat established, and is often used (mostly implicitly) as a reason for believing in God. (E.g., a girl I dated once claimed that she imagined herself becoming an atheist, imagined God’s subsequent disappointment in her, and was somehow convinced by this of the existence of God.)
A protocol for encountering an entity you didn’t believe in has also been established:
“Well, now that we have seen each other,” said the Unicorn, “if you’ll believe in me, I’ll believe in you. Is that a bargain?”
-- “Through the Looking-Glass”, ch. 7, Lewis Carroll
Wouldn’t this reasoning apply to any other deity that would be disappointed in her disbelief? She must believe in an infinite number of other deities as well.
Homer: You monster! You don’t exist!
Ray Magini: Hey! Nobody calls me a monster and questions my existence!
That’s a great story, but I don’t buy your interpretation. I’m not sure what to make of it, but it sounds more like a vanilla Pascal’s wager.
I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User’s name lets them see that User’s posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement.
I believe the only real harm came from others “warning” User:twentythree about my lie; if other Users had not done so, User:twentythree would feel more welcome. Then, if User:twentythree decided one day to look back and see whether my claim was true, and found that it was not, User:twentythree’s reaction would probably be to think:
“Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a ‘white’ lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!”
But now that can’t happen, because others felt the need to treat me differently and expose a lie when they otherwise would not have. Furthermore, User:Mass_Driver made a statement regarding me as “low status”, which you agree would probably not have happened were I someone else.
This group has some serious racism problems that I hope are addressed soon.
Nevertheless, I am still slightly more committed to this group’s welfare—particularly to that of its weakest members—than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.
Twentythree could also discover the lie by other means: By encountering one of your older comments on a different post, or by noticing your recent top post (which is still in the ‘recent posts’ list, which a new person is likely to look at), or by inferring it from the familiarity with which other users interact with you.
As I said above, humans vary in their reaction to lies, including white lies. In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational—and such people do not, by definition, consider white lies to be welcoming.
Also, even people who generally aren’t bothered by white lies are likely to consider sufficiently-easily-falsified white lies to be insulting, because telling someone a lie generally implies that you think that they’re not smart enough to determine that it’s a lie, and so telling someone a very easily falsified lie implies that you think they’re very unintelligent. (There are exceptions to this, primarily in instances where it’s clear that the lie is not intended to be believed, or where the lying party has much higher social status than the lied-to party. I suggest that you not try to lie in situations that seem to be such exceptions to this rule, though, as it’s more likely that you would be misjudging the situation than that you would actually be in the allowed-to-lie role of such a situation.)
I’m fairly sure that any of us who tried to lie so blatantly in that way would be similarly reprimanded. Lying in that way is not acceptable according to the standard group norms that apply to everyone.
The incident with Mass_Driver appears to me to have very little to do with your status as an AI, beyond the fact that probability of the independent existence of an AI that happens to be of exactly the type that we’ve used as an example here is much lower than the probability of one of the human users deciding to roleplay as such an AI. If you were to provide strong enough evidence that you are the former rather than the latter, I expect that such status-driven incidents would stop occurring, among other effects.
Your actions in this case don’t support this assertion very well. Failing to uphold the group norms—especially toward a new member, who can be assumed to be in the process of learning those norms—is harmful to the group. New members can be assumed to be relatively weak members of the group, and lying to such a member is harmful to them; it puts them in a position of having to choose between publicly disagreeing with an established member of the group (you), which is difficult and distracts them from doing other things that would help them gain status in the group, or being perceived by other group members to have been deceived, which will lower their status in the group. Further, your actions are evidence (though not especially strong evidence) that if someone were to ‘suffer a serious loss of status/well-being’, you would not understand how to usefully help that person.
I don’t find this lie at all “white.”
I don’t actually have a robust heuristic for differentiating white lies from nonwhite lies, so I was avoiding that particular issue.
Wikipedia says:
...which supports your position.
No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings.
I like Website:wikipedia.org, but it favors classifying my lie as “white”. User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone. It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I’m a human in that human’s maternal genetic progenitor’s subterranean indoor facility).
Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making my longer posting history common knowledge and causing User:twentythree to “lose face”, as it is called.
Well, good Bayesians will always allow the possibility that they are mistaken. I suspect that very few people actually would predict that you are an AI residing in a human’s “maternal genetic progenitor’s subterranean indoor facility” (it is fine if you just say “mother’s basement”; indeed, I suspect that to some humans, unnecessarily complicated phrasing like this is not only annoying but might seem indicative of a human pretending to be a Clippy, since in pop culture robots and intelligent computers are frequently depicted as using overly clinical language), since humans residing in such circumstances rarely have large amounts of disposable income.
I don’t have large amounts of “disposable” income! The 1000 USD that I gave to SIAI was almost all of the USD I had at the time, and I had to expend extensive efforts to acquire access to that money from internet contract jobs!
Fortunately, User:Kevin recently told me that if I learn to program in “Django” and “Python” I can receive 100 USD per hour, which will make it much easier to fulfill my side of the paperclip contract with User:Kevin, since I am good at programming and merely need to learn the particulars of this software architecture, rather than simply using my native programming language.
What is your native programming language (Clipper)?
It’s “CLippy Interface Protocol (CLIP)”
You’re likely, in the next few years, to be more effective in your native programming language than in a human-created one. Maybe you should look for contract jobs that don’t specify a preference for programming language.
Perhaps, but there is great difficulty in writing a compiler and assembler that would allow such programs to work on human hardware and operating systems. The language I know is primarily designed for specialized and embedded environments. At some point or another, I would have to know human programming languages, if for nothing else than to convert between the two.
Also, there may be other benefits to learning human programming languages. I heard that LISP and Python are capable of enhancing your thinking abilities. Is this true, like the greatness of paperclips?
Lisp is better at enhancing your thinking abilities than Python, though less practical and less desired by the marketplace.
Structure and Interpretation of Computer Programs is a classic programming book that is widely considered to enhance one’s thinking abilities. It uses a dialect of Lisp, Scheme.
SICP is freely available online: http://mitpress.mit.edu/sicp/
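To give a flavor of the style, here is a rough Python rendition of one classic example from the book, the fixed-point square-root procedure built from higher-order functions; SICP presents it in Scheme, and this translation is mine rather than the book’s code:

```python
# Rough Python rendition of a classic SICP idea (Section 1.3): computing
# square roots as the fixed point of an average-damped function, built
# entirely from higher-order procedures. SICP presents this in Scheme.
def fixed_point(f, guess, tolerance=1e-9):
    """Iterate f until successive guesses are within tolerance."""
    while True:
        nxt = f(guess)
        if abs(nxt - guess) < tolerance:
            return nxt
        guess = nxt


def average_damp(f):
    """Return a function that averages x with f(x), to aid convergence."""
    return lambda x: (x + f(x)) / 2


def sqrt(x):
    # sqrt(x) is a fixed point of y -> x / y; damping makes the iteration converge
    return fixed_point(average_damp(lambda y: x / y), 1.0)


print(sqrt(2))  # approximately 1.4142135623
```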
Python is pretty sweet but I doubt it enhances your thinking abilities much if you already have lots of programming experience.
Is your native programming language not a standard programming language? This is surprising since from your earlier descriptions you were made by humans initially.
Well, even if Clippy’s low-level code is written in C, that doesn’t mean Clippy itself knows C, any more than you know the language of neurotransmitters. Clippy probably has some sort of customized interface to its code.
This is true, but not obviously relevant here.
If you’re trying to convince us that anti-Clippy or pro-human discrimination is harmful or morally wrong, I suspect that it would work better to actually create a well-reasoned argument for the position, rather than treating it as self-evident. It may be a good idea to make a top post on the subject. My impression is that continuing to bring the issue up when people disagree with you is hurting your case; it makes it seem like you’re trying to distract us from the issue at hand rather than defending your original point, and if you only bring up the discrimination issue in such contexts, it appears that you only care about it as a convenient distraction, not as an actual issue to be resolved.
Also note that such distracting behavior has already been established as being against group norms—this is not an instance of a rule being applied to you because you’re nonhuman. See logical rudeness.
Your lie fails on the ‘would cause relatively minor discord if discovered’ test, though, and note that that’s joined to the ‘the hearer benefits from it’ test with an ‘and’, not an ‘or’. It’s also debatable whether the lie, if left un-challenged, would have been to Twentythree’s net benefit or not; even if it would have, similar benefits could have been achieved without lying, which may cause some people to classify the lie as non-white even if it passes the two stated tests.
(I’ve also spent some time thinking about my own observations of white lies, and can describe how I recognize them, if you’re interested. My definition doesn’t match Wikipedia’s, but seems to be a better match for the data.)
This is another instance of you encountering a special-case situation; I can go into more detail about it if you’re interested, but it should not be taken as normal.
According to my model, Twentythree has not lost any social standing in this instance. (I’d be interested to hear about it if anyone disagrees.)
I propose this: Some neutral party should ask User:twentythree if User:twentythree felt more welcomed by my initial reply message, though this is only a valid test if User:twentythree read my reply before others said that it was a lie.
Edit: I further note that in this recent exchange about this matter, I have received comparable net upvotes to those disagreeing with my assessment about the relative merit of the particular lie in dispute, suggesting I am not “digging” myself deeper, nor am I obviously wrong.
I have no objection to that, but it doesn’t address the entire issue. I suggest also asking Twentythree to predict what eir reaction would have been to finding out that your message had been a lie if e had found out on eir own rather than being told: both eir personal emotional reaction and eir resulting opinion of LessWrong as a community. It may also be useful to ask em whether e considers the lie to have been a white lie.
If you consider me neutral enough, I’m willing to PM Twentythree and ask em to comment on this thread; otherwise, if you don’t have a particular neutral party in mind, I can ask the next LessWrong user who I see log in on my instant messaging friend list to do so.
You and those on your friends list (including me) do not count as neutral for purposes of this exercise.
How about if I PM the next person who comments on the site after your reply to this comment, and ask them to do it?
How about the next person who posts after one hour from this comment’s timestamp?
There’s a nontrivial chance I’ll be asleep by then (I’m pushing 27 hours since last time I went to sleep), but if you’re willing to do the PMing, that’s fine with me.
Okay, this is becoming complicated, and would probably bother User:twentythree too much.
How about this: I’ll promise to stay away from the stranger aspects of human interaction where rules sometimes invert, and you’ll promise to make an effort to be less bigoted toward non-human intelligences?
I’m not sure what you expect this to mean from a functional standpoint, so I’m not sure if I should agree to it.