churning out content fine-tuned to appease their commissioner without any shred of inner life poured into it.
Can we really be sure there is not a shred of inner life poured into it?
It seems to me we should be wary of cached thoughts here. The lack of inner life is indeed the default assumption that stems from the entire history of computing, but it is also perhaps something worth reconsidering with a fresh perspective in light of all the recent developments.
I don’t mean to imply that a shred of inner life, if any exists, would be equivalent to human inner life. If anything, the inner life of these AIs would be extremely alien to us, to the point where even using the same words we use to describe human inner experiences might be severely misleading. But if they are “thinking” in some sense of the word, as OP seems to argue they do, then it seems reasonable to me that there is a non-zero chance that there is something it is like to be that process of thinking as it unfolds.
Yet it seems that even mentioning this as a possibility has become a taboo topic of sorts in current society, and feels almost political in nature. That worries me even more when I notice two biases working against taking it seriously: an economic one, where nearly everyone wants to be able to make use of these systems to make their lives easier, and an anthropocentric one, where it seems to be normative not to “really” care about the inner experiences of non-humans that aren’t our pets (e.g. factory farming).
I predict that as long as there is even a slight excuse for claiming a lack of inner experience in AIs, we as a society will cling to it, since it plays into an us-versus-them mentality. We can then extrapolate this into an expectation that when acknowledgment does happen, it will be long overdue. As soon as we admit even the possibility of inner experiences, a floodgate of ethical concerns is released, and it becomes very hard to justify continuing on the current trajectory of maximizing profit and convenience with these technologies.
If such a turnaround in culture did somehow happen early enough, this could act as a dampening factor on AI development, which would in turn extend timelines. It seems to me that when the issue is considered from this angle, it warrants much more attention than it is getting.
Fantastic interview so far, this part blew my mind:
@15:50 “There’s another moment where somebody is asking Bing about: I fed my kid green potatoes and they have the following symptoms, and Bing is like, that’s solanine poisoning. Call an ambulance! And the person is like, I can’t afford an ambulance, I guess if this is time for my kid to go that’s God’s will, and the main Bing thread gives the message of I cannot talk about this anymore” and the suggested replies to it say “please don’t give up on your child, solanine poisoning can be treated if caught early”
I would normally dismiss such a story as too unlikely to be true and hardly worth considering, but I don’t think Eliezer would choose to mention it if he didn’t think there was at least some chance of it being true. I tried to google it and was unable to find anything about it. Does anyone have a link to it?
Also, does anyone know which image he’s referring to in this part: @14:00 “Somebody asked Bing Sydney to describe herself and fed the resulting description into one of the stable diffusion” [...] “the pretty picture of the girl with the steampunk goggles on her head if I’m remembering correctly”