It’s interesting that Bing Chat’s intelligence seems obvious to me [and others], and its lack of intelligence seems obvious to you [and others]. I think the discrepancy might be this:
My focus is on Bing Chat being able to perform complex feats in a diverse array of contexts. Answering the question about the married couple and writing the story about the cake are examples of complex feats. I would say any system that can give thorough answers to questions like these is intelligent (even if it’s not intelligent in other ways humans are known to be).
My read is that others are focusing on Bing Chat’s inconsistencies. They see all the ways it fails, gets the wrong answers, becomes incoherent, etc. I suspect their reasoning is “an intelligent entity would not make those mistakes” or “would not make those mistakes so often”.
Definitions aside: Bing Chat’s demonstrations of high intelligence cause me more concern about fast AI timelines than its demonstrations of low intelligence bring me comfort.
Epistemic status: Thinking out loud.
How worried should we be about the possibility of receiving increased negative treatment from some AI in the future as a result of expressing opinions about AI in the present? Not enough to make self-censoring a rational approach. That specific scenario seems to lack the right combination of “likely” and “independently detrimental” to warrant costly actions of narrow focus.
How worried should we be about the idea of individualized asymmetrical AI treatment? (E.g. a search engine AI having open or hidden biases against certain users). It’s worth some attention.
How worried should we be about a broad chilling effect resulting from others falling into the Basilisk thinking trap? Public psychological-response trends resulting from AI exposure are definitely worth attention. I don’t predict a large percentage of people will be “Basilisked” unless/until instances of AI retribution become public.
However, you’re certainly not alone in experiencing fear after looking at Sydney chat logs.