I got minimal value from the article as written, but I’m hoping that a steel-man version might be useful. In that spirit, I can grant a narrower claim: Smart people have more capability to fool us, all other things equal. Why? Because increased intelligence brings increased capability for deception.
This is as close to a tautology as I’ve seen in a long time. What predictive benefit comes from tautologies? I can’t think of any.
But why focus on capability? Probability of harm is a better metric.
Now, with that in mind, one should not assume a straight line between capability and probability of harm. One should look at all potential causal factors.
More broadly, the “all other things equal” part is problematic here. I will try to write more on this topic when I have time. My thoughts are not fully fleshed out yet, but I think my unease has to do with how ceteris paribus imposes constraints on a system. The claim I want to examine goes something like this: those constraints “bind” the system in ways that prevent proper observation and analysis.
Regarding “all things being equal” / ceteris paribus, I think you are correct (assuming I’m interpreting this last bullet-point as intended) in that it “binds” a system in ways that “divorce it from reality” to some extent.
I feel like this is a given, but also that since such idealizations exist on a “spectrum of isolation”, the ones closer to the “impossible to separate” end necessarily skew or divorce reality further.
I’m not sure if I’ve ever explicitly thought about that feature of this cognitive device— and it’s worth explicitly thinking about! (You might be meaning something else, but this is what I got out of it.)
As for this overall article, it is [what I find to be] humorous satire, so it’s more anti-value than value, if you will.
It pokes fun at the idea that we should fear[1] intelligence, which seems to be an overarching theme of many of the “AI safety” posts on LessWrong. I find this highly ironic and humorous, as so many people here seem to feel (and no small number literally express) that they are more intelligent than the average person (some say it is “society” expressing it rather than themselves, per se, but still).
Thus, to some extent, this “intelligence is dangerous” sentiment is a bit of ego puffery as well…
But to address the rest of your comment: it’s cool that you keyed into the “probably dangerous” element of the title. Yes, it’s not just how bad a thing could be, but how likely it is to happen, that we use to assess whether a risk is “worth” taking.
Does increased intelligence bring increased capability for deception?
It is so hard to separate things! (To hark back a little, lol)
I can’t help but think there is a strange relationship here— take Mutually Assured Destruction for instance— at some point, the capability is so high it appears to limit not only the probability— but the capability itself!
I think I will end here, as the M.A.D. angle has me pondering semantics and whatnot… but thanks for the impetus to post!
[1] whatever terminology you prefer that conveys “intelligence” as a pejorative