Yes, I think humans are hard to model as CSAs (because they don’t cleanly separate “is” from “ought”), but my other problem is that, AFAICT, anything can be equivalently expressed as a CSA. So I’d like an example of a system, preferably an intelligent one, that is *not* a CSA, so I know what I’m differentiating CSAs from.