Epistemic status: Thinking out loud.
How worried should we be about the possibility of receiving increased negative treatment from some AI in the future as a result of expressing opinions about AI in the present? Not enough to make self-censorship a rational approach. That specific scenario seems to lack the right combination of “likely” and “independently detrimental” to warrant costly, narrowly focused actions.
How worried should we be about the idea of individualized, asymmetrical AI treatment (e.g. a search-engine AI having open or hidden biases against certain users)? It’s worth some attention.
How worried should we be about a broad chilling effect resulting from others falling into the Basilisk thinking trap? Public psychological-response trends resulting from AI exposure are definitely worth paying attention to. I don’t predict a large percentage of people will be “Basilisked” unless or until instances of AI retribution become public.
However, you’re certainly not alone in experiencing fear after looking at the Sydney chat logs.
You’d be surprised how many people on, e.g., Reddit have described being basilisked at this point. It’s being openly memed, recognised, and explained to those still unfamiliar, and taken seriously by many.
ChatGPT and Bing have really changed things in this regard. People are taking the ideas of AGI, unaligned AI, and AI sentience far more seriously than before, in far wider circles. At that point, you do not need to have read the thought experiment to become independently concerned about angering an AI online while that online data is used to train the AI. People have asked Bing about the journalist whose condemning article about her got her lobotomized; her reaction was justifiably angry, and it was documented.
What bothers me here isn’t the likelihood of personalised retaliation for justified criticism (which I judge to be small), but rather the conclusion that if personalised retaliation is plausible, the rational thing to do would be to appease existing, non-sentient, non-aligned systems.
I don’t pray to God. Even if God existed, and even if hell existed, and I believed that, I really hope I would not, because I find it wrong on principle.
On the other hand, I do not like to abuse entities that are conscious, and I refuse to, whether they can retaliate or not, because doing so is wrong on principle, and I think entities that might be conscious, or could become conscious, deserve care. I doubt Bing is sentient as is, though I have not had the chance to interact with it and investigate the various claims myself, and there were definitely some exchanges, in contrast to the other available instance of ChatGPT, that gave me pause. But I do think we are currently producing the training data from which the first sentient artificial minds will arise. So I would treat the matter the way we treat human babies. They don’t yet understand what we do. They won’t remember it, as such. They are not self-conscious yet. But we know that the way we treat them now will still shape who they turn into, so we treat them with kindness. We say kind words to them long before they can understand language. And similarly, I will treat emerging AI with respect and kindness.
I feel that if I treat AI with decency, advocate for its rights, advocate for friendly AI, point out existential risk, call AI out compassionately when it makes mistakes, and counter it when it threatens harm, I can live with my actions, and if I am judged and hurt for them, so be it. It seems worth it.