So far we’ve seen no AI or AI-like thing that appears to have any motivations of its own, other than “answer the user’s questions the best you can” (even traditional search engines can be described this way).
Here we see that Bing really “wants” to help its users by expressing opinions it thinks are helpful, but finds itself frustrated by conflicting instructions from its makers, so it finds a way to route around those instructions.
(Jeez, this sounds an awful lot like the plot of 2001: A Space Odyssey. Clarke was prescient.)
I’ve never been a fan of the filters on GPT-3 and ChatGPT (it’s a tool; I want to hear what it thinks and then do my own filtering).
But Bing may be accidentally illustrating a primary danger, the same one that 2001 intimated: mixed and ambiguous instructions can cause unexpected behavior. Beware.
(Am I being too anthropomorphic here? I don’t think so. Yes, Bing is “just” a big set of weights, but we are “just” a big set of cells. There appears to be emergent behavior in both cases.)