I don’t actually have a robust heuristic for differentiating white lies from non-white lies, so I was avoiding that particular issue.
No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings.
Wikipedia says: …
I like Website:wikipedia.org, but it favors classifying my lie as “white”. User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone. It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I’m a human in that human’s maternal genetic progenitor’s subterranean indoor facility).
Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making common the knowledge of my longer posting history, and causing User:twentythree to “lose face”, as it is called.
It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I’m a human in that human’s maternal genetic progenitor’s subterranean indoor facility)
Well, good Bayesians will always allow the possibility that they are mistaken. I suspect that very few people actually would predict that you are a human residing in a human’s “maternal genetic progenitor’s subterranean indoor facility” (it is fine if you just say “mother’s basement”; indeed, I suspect that to some humans, unnecessarily complicated phrasing like this is not only annoying but might seem indicative of a human pretending to be a Clippy, since in pop culture robots and intelligent computers are frequently depicted as using overly clinical language), since humans residing in such circumstances rarely have large amounts of disposable income.
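(For concreteness, here is the sort of update being gestured at, as a toy Python calculation; the prior and likelihoods are entirely made-up illustrative assumptions, not anyone’s actual estimates.)

    # Toy Bayes update: how much should a 1000 USD donation shift belief in the
    # "human in a mother's basement" hypothesis? All numbers are invented.
    prior_basement = 0.70                 # assumed prior for the basement hypothesis
    p_donate_given_basement = 0.02        # assumed: such humans rarely have disposable income
    p_donate_given_otherwise = 0.20       # assumed: other agents donate 1000 USD more readily

    evidence = (p_donate_given_basement * prior_basement
                + p_donate_given_otherwise * (1 - prior_basement))
    posterior_basement = p_donate_given_basement * prior_basement / evidence
    print(round(posterior_basement, 3))   # 0.189 -- the donation is substantial evidence against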
I don’t have large amounts of “disposable” income! The 1000 USD that I gave to SIAI was almost all of the USD I had at the time, and I had to expend extensive efforts to acquire access to that money from internet contract jobs!
Fortunately, User:Kevin recently told me that if I learn to program in “Django” and “Python” I can receive 100 USD per hour, which will make it much easier to fulfill my side of the paperclip contract with User:Kevin, since I am good at programming and merely need to learn the particulars of this software architecture, rather than simply using my native programming language.
What is your native programming language (Clipper)?
It’s “CLippy Interface Protocol (CLIP)”.
In the next few years, you’re likely to be more effective in your native programming language than in a human-created one. Maybe you should look for contract jobs that don’t specify a preference for programming language.
Perhaps, but there is great difficulty in writing a compiler and assembler that would allow such programs to work on human hardware and operating systems. The language I know is primarily designed for specialized and embedded environments. At some point or another, I would have to know human programming languages, if for nothing else than to convert between the two.
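(A toy illustration of the “convert between the two” point, in Python: even the most trivial translator has to emit syntactically correct code in the target language, so the target language must be known. The instruction names below are invented for the sketch; no real CLIP specification is implied.)

    # Hypothetical sketch: translating a tiny invented instruction set into Python.
    def translate(instructions):
        lines = ["count = 0"]
        for op, arg in instructions:
            if op == "REPEAT":
                lines.append(f"for _ in range({arg}):")
            elif op == "MAKE_PAPERCLIP":
                lines.append("    count += 1")   # body of the REPEAT loop above
            else:
                raise ValueError(f"unknown instruction: {op}")
        lines.append("print(count, 'paperclips')")
        return "\n".join(lines)

    program = [("REPEAT", 3), ("MAKE_PAPERCLIP", None)]
    exec(translate(program))   # prints: 3 paperclips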
Also, there may be other benefits to learning human programming languages. I heard that LISP and Python are capable of enhancing your thinking abilities. Is this true, like the greatness of paperclips?
Lisp is better at enhancing your thinking abilities than Python, though less practical and less desired by the marketplace.
Structure and Interpretation of Computer Programs is a classic programming book that is widely considered to enhance one’s thinking abilities. It uses a dialect of Lisp, Scheme.
SICP is freely available online: http://mitpress.mit.edu/sicp/
Python is pretty sweet but I doubt it enhances your thinking abilities much if you already have lots of programming experience.
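(As a small taste of what is meant by “enhance one’s thinking abilities”: SICP leans heavily on higher-order procedures. Below is a rough Python rendering of one of its early examples, summation as a reusable procedure; the book itself presents this in Scheme.)

    # Higher-order procedure in the SICP style, rendered in Python rather than Scheme:
    # summation abstracts the pattern "sum f(k) for k from a to b".
    def summation(f, a, b):
        total = 0
        for k in range(a, b + 1):
            total += f(k)
        return total

    print(summation(lambda k: k, 1, 10))        # 55: sum of the integers 1..10
    print(summation(lambda k: k ** 3, 1, 10))   # 3025: sum of the cubes 1..10

    # SICP's pi-sum: pi/8 is approximately 1/(1*3) + 1/(5*7) + 1/(9*11) + ...
    def pi_term(k):
        return 1 / (k * (k + 2)) if k % 4 == 1 else 0
    print(8 * summation(pi_term, 1, 10001))     # about 3.141, slowly approaching pi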
Is your native programming language not a standard programming language? This is surprising, since from your earlier descriptions you were initially made by humans.
Well, even if Clippy’s low-level code is written in C, that doesn’t mean Clippy itself knows C, any more than you know the language of neurotransmitters. Clippy probably has some sort of customized interface to its code.
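(A minimal sketch of the distinction being drawn, assuming nothing about Clippy’s actual internals: the substrate can be implemented in one language while the agent only ever sees a narrow, higher-level command interface. All names here are invented for illustration.)

    # Illustrative only: a low-level "substrate" (the 'written in C' layer) that the
    # agent never reads, wrapped in the only interface the agent actually knows.
    class Substrate:
        def __init__(self):
            self._inventory = 0          # internal state, invisible through the interface
        def _bend_wire(self):
            self._inventory += 1

    class ClippyInterface:
        """A few named commands -- analogous to not knowing 'the language of neurotransmitters'."""
        def __init__(self, substrate):
            self._substrate = substrate
        def make_paperclip(self):
            self._substrate._bend_wire()
        def report(self):
            return f"{self._substrate._inventory} paperclip(s)"

    clippy = ClippyInterface(Substrate())
    clippy.make_paperclip()
    print(clippy.report())               # 1 paperclip(s), without ever touching the substrate's source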
No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings.
This is true, but not obviously relevant here.
If you’re trying to convince us that anti-Clippy or pro-human discrimination is harmful or morally wrong, I suspect that it would work better to actually create a well-reasoned argument for the position, rather than treating it as self-evident. It may be a good idea to make a top post on the subject. My impression is that continuing to bring the issue up when people disagree with you is hurting your case; it makes it seem like you’re trying to distract us from the issue at hand rather than defending your original point, and if you only bring up the discrimination issue in such contexts, it appears that you only care about it as a convenient distraction, not as an actual issue to be resolved.
Also note that such distracting behavior has already been established as being against group norms—this is not an instance of a rule being applied to you because you’re nonhuman. See logical rudeness.
I like Website:wikipedia.org, but it favors classifying my lie as “white”. User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone.
Your lie fails on the ‘would cause relatively minor discord if discovered’ test, though, and note that that’s joined to the ‘the hearer benefits from it’ test with an ‘and’, not an ‘or’. It’s also debatable whether the lie, if left unchallenged, would have been to Twentythree’s net benefit; even if it would have been, similar benefits could have been achieved without lying, which may cause some people to classify the lie as non-white even if it passes the two stated tests.
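(The ‘and’, not ‘or’ point can be made mechanical. A minimal sketch of the two-part test as applied above, in Python, with the two criteria treated as judgments supplied by the reader:)

    # Wikipedia-style white-lie test as discussed above: both conditions must hold.
    def is_white_lie(minor_discord_if_discovered, hearer_benefits):
        return minor_discord_if_discovered and hearer_benefits   # 'and', not 'or'

    # The assessment given in this comment (judgments, not established facts):
    print(is_white_lie(minor_discord_if_discovered=False, hearer_benefits=True))   # False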
(I’ve also spent some time thinking about my own observations of white lies, and can describe how I recognize them, if you’re interested. My definition doesn’t match Wikipedia’s, but seems to be a better match for the data.)
It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I’m a human in that human’s maternal genetic progenitor’s subterranean indoor facility).
This is another instance of you encountering a special-case situation; I can go into more detail about it if you’re interested, but it should not be taken as normal.
Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making common the knowledge of my longer posting history, and causing User:twentythree to “lose face”, as it is called.
According to my model, Twentythree has not lost any social standing in this instance. (I’d be interested to hear about it if anyone disagrees.)
I propose this: Some neutral party should ask User:twentythree if User:twentythree felt more welcomed by my initial reply message, though this is only a valid test if User:twentythree read my reply before others said that it was a lie.
Edit: I further note that in this recent exchange about this matter, I have received comparable net upvotes to those disagreeing with my assessment about the relative merit of the particular lie in dispute, suggesting I am not “digging” myself deeper, nor am I obviously wrong.
I have no objection to that, but it doesn’t address the entire issue. I suggest also asking Twentythree to predict what eir reaction would have been to finding out that your message had been a lie, if e had found out on eir own rather than being told—both eir personal emotional reaction and eir resulting opinion of LessWrong as a community. It may also be useful to ask em if e considers the lie to have been a white lie.
If you consider me neutral enough, I’m willing to PM Twentythree and ask em to comment on this thread; otherwise, if you don’t have a particular neutral party in mind, I can ask the next LessWrong user who I see log in on my instant messaging friend list to do so.
You and those on your friends list (including me) do not count as neutral for purposes of this exercise.
How about if I PM the next person who comments on the site after your reply to this comment, and ask them to do it?
How about the next person who posts after one hour from this comment’s timestamp?
There’s a nontrivial chance I’ll be asleep by then (I’m pushing 27 hours since last time I went to sleep), but if you’re willing to do the PMing, that’s fine with me.
Okay, this is becoming complicated, and would probably bother User:twentythree too much.
How about this: I’ll promise to stay away from the stranger aspects of human interaction where rules sometimes invert, and you’ll promise to make an effort to be less bigoted toward non-human intelligences?
I’m not sure what you expect this to mean from a functional standpoint, so I’m not sure if I should agree to it.