I think there are two main problems being pointed at here. First is that it seems reasonable to say that, under most definitions, intelligence is largely continuous (though some would argue for a few specific discontinuities); thus, it seems unreasonable to ask "is X intelligent" as a yes-or-no question. A teacup may be slightly more intelligent than a rock, and far less intelligent than GPT-3.
Second is that the thing we actually care about is neither "intelligence" nor "Bayesian agent"; just because you can't yet name something very precisely doesn't mean that thing doesn't exist or isn't worth thinking about. The thing we care about is that someone might make a thing in 10 years that literally kills everyone, and we have some models of how we might expect that thing to be built. By analogy, imagine a big philosophical argument over what counts as a "chair": some argue bitterly over whether stools count, or whether tiny microscopic chair-shaped things count, or whether rocks count because you can sit on them; others argue that there is in fact no such thing as a physical chair, because concepts like that exist only in the map while the territory is made of atoms; and so on. But if your actual problem is that you expect chairs to break when you sit on them unless they are structurally sound, then most of these arguments are a huge distraction. Or more pithily:
“nooo that’s not really intelligent” I continue to insist as I shrink and transform into a paperclip
The other part is that humans can pursue pretty arbitrary instrumental goals, whereas if you tell the teacup it has to win a chess match or die, it will die.
No, they can’t. See: “akrasia” on the path to protecting their hypothetical predicted future selves 30 years from now.
The teacup takes the W here too. It’s indifferent to blackmail! [chad picture]
"Pretty arbitrary" of course not meaning "absolutely arbitrary", just meaning more arbitrary than most things, such as teacups. And by "tell" I meant that I give an ultimatum and then follow through.
Fair.
Something something blackmailer is subjunctively dependent with the teacup! (This is a joke.)