Which sounds like that fuzzily-defined “conscience” thing. So suppose I say that this “stone tablet” is not a literal tablet, but is rather a set of rules that sufficiently advanced lifeforms will tend to accord with? Is this fundamentally different from the opposite side of the argument?
Well, that depends. What does “sufficiently advanced” mean? Does this claim have anything to say about Clippy?
If it doesn’t constrain anticipation there, I suspect no difference exists.
Ha! No. I guess I’m using a stricter definition of a “mind” than is used in that post: one that is able to model itself. I recognize the utility of such a generalized definition of intelligence, but I’m talking about a subclass of said intelligences.
Er, why couldn’t Clippy model itself? Surely you don’t mean that Clippy would change its end-goals if it did so (for what reason?).
… Just to check: we’re talking about Microsoft Office’s Clippy, right?
Not likely.
Oh dear; how embarrassing. Let me try my argument again from the top, then.
Actually, this is what we’re really talking about, not MS Word constructs or LW roleplayers.