You just have to explain what you mean by “good” or “bad”. They’re very vague terms. Often they mean “things I like” or “things that I, and those I like, like”. “Those I like” could be anywhere between one other person I like a little and every thing that can think even a little, given equal weight to myself. People mean all of those things by “good”. You can guess from context what someone might mean, but if you want to have a clear discussion, it’s best to specify.
As for whether those other information-processing systems (like LLMs and bugs) really have opinions about what is good and bad for them in the same rich way humans seem to, that is a separate question.
I think precisely defining “good” and “bad” is a bit beside the point—it’s a theory about how people come to believe things are good and bad, and we’re perfectly capable of having vague beliefs about goodness and badness. That said, the theory is lacking a precise account of what kind of beliefs it is meant to explain.
The LLM section isn’t meant as support for the theory, but speculation about what it would say about the status of “experiences” that language models can have. Compared to my pre-existing notions, the theory seems quite willing to accommodate LLMs having good and bad experiences on par with those that people have.