I’M TELLING YOU HE ALREADY KNEW THE EARTH WAS ROUND
(hysterically)
Hi, this is unrelated to the above comment, but I’m unable to post to Discussion. Since I can’t post to Discussion, I can’t post in Discussion that I can’t post in Discussion, so I’m doing it here.
I’m not sure what happened, but when I go to the submission page and click Submit, a message in red appears to the right of the ‘Submit’ button saying “submitting”, then it stops and nothing happens.
Can anyone help me solve this?
Note: “here” is located in Discussion, so you just did post in Discussion that you can’t post in Discussion.
OK, to be precise: I cannot make top-level posts in Discussion, but I can post comments on posts in Discussion. Nice catch.
Read the About section (http://lesswrong.com/about/). Specifically, you need 2 karma to make a top-level post in Discussion. Until then, make relevant comments or post your thoughts in the open thread. Read the rest of the About section, too.
I read the whole of it and it doesn’t apply to my case.
I have 198 karma. I have made Main posts before, and several Discussion posts; one of them I posted just yesterday! Is there a rule like “you are not allowed to post twice in less than 48 hours”?
I have found this, while somewhat better formatted, just as impenetrable as your previous attempts. The “summary” was not very helpful, either. I wish someone who understands your points could summarize what you mean for the simpletons like me.
Sure, why not?
Understanding what a self (and a volition) is matters; CEV relies on extrapolating the volition of selves, and therefore on understanding selves/volitions. But there’s no reason to think that there’s a unique reduction of “self”; indeed, there almost certainly isn’t (Diego gives various examples). Also, there are various other things constraining our intuitive definition, such as the requirement that selves be utility maximizers.
One way out for CEV is that the Turing Test is a reliable means of identifying a subset of selves; once we can identify an AGI as a self via the Turing Test, it can then itself use the Turing Test to identify (some) other selves.
I enjoyed this quite a bit. I find myself agreeing that any useful conception of personhood will probably be a complicated, fuzzy thing. I also agree that this fuzziness isn’t a reason to not attempt to clarify the matter.
My main hesitation comes from the claim that the primary salient distinction between an “organism” and a “self” is, basically, language. How do you know that ant colonies aren’t processing abstract reflective concepts via complex chemical signaling?
Also, it seems like any human under six years old would stand a chance of not being classified as a person by your proposed scheme. An important part of the Turing test is that the human judges who might be tricked into believing the AI is a person are not so skeptical that they will classify a significant fraction of actual people as AIs. In other words, I think the Turing test is a terrible personhood test.
Yes, there are plenty of people who don’t pass the Turing test (e.g., those who don’t speak the right language). For this reason, the Turing test, with a human or machine judge, is not a good nonperson predicate, contrary to the OP. But it can be taken as a person predicate. That is, if something passes a strict enough Turing test, it’s reasonable to regard that thing as a person.
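To make that one-sidedness concrete, here is a minimal Python sketch. Everything in it is my own illustrative assumption, not anything from the thread: the names strict_turing_test and person_verdict are hypothetical, and the test itself is caricatured as a keyword check. The point is only the logical shape: a pass licenses the verdict “person”, while a fail licenses no verdict at all.

    from typing import Optional

    def strict_turing_test(transcript: str) -> bool:
        """Hypothetical stand-in for a strict Turing test, caricatured
        here as a check for reflective, self-referential language."""
        markers = ("i think", "i believe", "myself", "my own")
        return any(m in transcript.lower() for m in markers)

    def person_verdict(transcript: str) -> Optional[bool]:
        """One-sided person predicate: passing licenses the verdict
        'person'; failing licenses no verdict, since false negatives
        (e.g. not speaking the judge's language) are common."""
        return True if strict_turing_test(transcript) else None

    # A pass yields a positive verdict; a fail yields 'unknown',
    # never 'non-person'.
    print(person_verdict("I think, and I can reflect on myself thinking."))  # True
    print(person_verdict("Bonjour."))                                        # None

Under these assumptions, the test can only ever add members to the set of recognized persons, which is why it can work as a person predicate while failing as a nonperson predicate.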
But then this defeats the whole purpose: if we don’t want the AI to be a person, then it won’t be able to pass the Turing test, and then it is unclear whether it would be able to use the test to tell people from non-people.
Indeed.
I appreciate the degree of care you take in your conceptual analysis, Diego.
You’re pinning down a certain concept of “self”, and I’m not quite sure what it’s going to be used for:
If “selves” are to be the things alive today whose volition will be extrapolated, then as long as the set of selves contains me or people like me, I’ll be happy.
If “selves” is supposed to refer to the set of all things whose volition ought to be extrapolated, then we ought to be careful to exclude Babyeaters and the like.
I would recommend adding a short paragraph summarizing the properties we would like a self to have according to your proposal (even though it is a cluster concept).
Diego, when you say “We also demand honor, respectability, resilience, accountability,” do you mean honor, respectability, and resilience just as desirable moral properties (i.e., not as properties one must have to be considered a self)?
I’m saying we measure how much self someone has on the basis of various heuristics, among them social demands such as respectability, resilience, and honor. It’s more about what is in the eye of the beholder.
I have my own ideas on what a self isn’t.