Bing Chat is not intelligent. It doesn’t really have a character. (And whether one calls it GPT-4 or not doesn’t seem very meaningful, given the number of post-GPT changes.)
But to the extent that people think one or more of the above things are true, it will tend to increase skepticism of AI and support for taking more care in deploying it and for regulating it, all of which seem positive.
Oh come on! At some point we have to stop moving the goalposts. Bing Chat is absolutely intelligent, as basically any layperson or researcher pre-2015 would have told you. We can quibble about the lack of causal understanding or whatever, but the adjective “intelligent” clearly describes it, unless you’re prepared to say a significant fraction of humans are not “intelligent” at writing.
This is an interesting question on which I’ve gone back and forth. I think ultimately, the inability to recognize blatant inconsistencies or to reason at all means that LLMs so far are not intelligent. (Or at least not more intelligent than a parrot.)
It’s interesting that Bing Chat’s intelligence seems obvious to me [and others], and its lack of intelligence seems obvious to you [and others]. I think the discrepancy might be this:
My focus is on Bing Chat being able to perform complex feats in a diverse array of contexts. Answering the question about the married couple and writing the story about the cake are examples of complex feats. I would say any system that can give thorough answers to questions like these is intelligent (even if it’s not intelligent in other ways humans are known to be).
My read is that others are focusing on Bing Chat’s inconsistencies. They see all the ways it fails, gets the wrong answers, becomes incoherent, etc. I suspect their reasoning is “an intelligent entity would not make those mistakes” or “would not make those mistakes so often”.
Definitions aside: Bing Chat’s demonstrations of high intelligence cause me more concern about fast AI timelines than its demonstrations of low intelligence bring me comfort.
I think the issue here (about whether it is intelligent) is not so much a matter of the answers it fashions, but whether it can be said to fashion them from an “I”. If not, it is basically the proverbial Chinese Room, though this merely moves the goalposts to the question of whether humans are not, actually, also a Chinese Room, just a more sophisticated one. I suspect that we will not be very eager to accept such a finding; indeed, we may not be capable of seeing ourselves thus, for it implies a whole raft of rather unpleasant realities (like, say, the absence of free will, or indeed any will at all) which we’d not want to be true, to put it mildly.
I’ve met humans who are unable to recognize blatant inconsistencies. Not quite to the same extent as Bing Chat, but still. Also, I’m pretty sure monkeys are unable to recognize blatant inconsistencies, and monkeys are intelligent.
What makes monkeys intelligent in your view?