Regarding “all things being equal” / ceteris paribus: I think you are correct (assuming I’m interpreting your last bullet point as intended) that it “binds” a system in ways that “divorce it from reality” to some extent.
I feel like this is a given, but also that since the concept exists on a “spectrum of isolation”, the variables closer to the “impossible to separate” end necessarily skew or divorce reality further.
I’m not sure I’ve ever explicitly thought about that feature of this cognitive device, and it’s worth thinking about explicitly! (You might mean something else, but this is what I got out of it.)
As for the overall article, it is [what I find to be] humorous satire, so it’s more anti-value than value, if you will.
It pokes fun at the idea that we should fear[1] intelligence, which seems to be an overarching theme of many of the “AI safety” posts on LessWrong. I find this highly ironic and humorous, as so many people here seem to feel (and not a few literally express) that they are more intelligent than the average person (some say it is “society” expressing it rather than they themselves, per se, but still).
Thus, to some extent, this “intelligence is dangerous” sentiment is a bit of ego puffery as well…
But to address the rest of your comment: it’s cool that you keyed into the “probably dangerous” element of the title, because yes, it’s not just how bad a thing could be, but also how likely it is to happen, that we use to assess whether a risk is “worth” taking.
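(As a rough sketch of the standard expected-value framing, which is my gloss rather than anything from the article: $\text{expected harm} = P(\text{event}) \times \text{severity}(\text{event})$. So a catastrophe with severity $10^6$ but probability $10^{-9}$ contributes less expected harm than a nuisance with severity $10$ and probability $0.5$, which is why the “probably” is doing real work in the title.)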
Does increased intelligence bring increased capability for deception?
It is so hard to separate things! (To hark back a little, lol)
I can’t help but think there is a strange relationship here. Take Mutually Assured Destruction, for instance: at some point the capability is so high that it appears to limit not only the probability of use, but the capability itself!
I think I will end here, as the M.A.D. angle has me pondering semantics and whatnot… but thanks for the impetus to post!
[1] whatever terminology you prefer that conveys “intelligence” as a pejorative