By your Devil’s logic here, we would expect at least part of human nature to accord with the whole of this ‘stone tablet’. I think we could vary the argument to avoid this conclusion. But as written it implies that each ‘law’ from the ‘tablet’ has a reflection in human nature, even if perhaps some other part of human nature works against its realization.
This implies that there exists some complicated aspect of human nature we could use to define morality which would give us the same answers as the ‘stone tablet’.
Which sounds like that fuzzily-defined “conscience” thing. So suppose I say that this ‘stone tablet’ is not a literal tablet, but rather a set of rules that sufficiently advanced lifeforms will tend to accord with? Is this fundamentally different from the opposite side of the argument?
Well, that depends. What does “sufficiently advanced” mean? Does this claim have anything to say about Clippy?
If it doesn’t constrain anticipation there, I suspect no difference exists.
Ha! No. I guess I’m using a stricter definition of a “mind” than is used in that post: one that is able to model itself. I recognize the utility of such a generalized definition of intelligence, but I’m talking about a subclass of said intelligences.
Er, why couldn’t Clippy model itself? Surely you don’t mean that you think Clippy would change its end-goals if it did so (and for what reason would it?).
… Just to check: we’re talking about Microsoft Office’s Clippy, right?
Not likely.
Oh dear; how embarrassing. Let me try my argument again from the top, then.
Actually, the paperclip maximizer is what we’re really talking about, not MS Word constructs or LW roleplayers.