Take note, Wei_Dai and everyone who uses could-should-agents as models of humans.
I agree with your point about the is/ought non-distinction. When you refer to CS Agents, are you just emphasising the extent to which humans diverge from that idealized model?
For my part, I find the CSA model interesting but don't find CSAs a remotely useful way to model humans. That is probably because 'could and should' are the easy part, and I need other models to predict the 'but probably will' bit.
Yes, I think humans are hard to model as CSAs (because they don't cleanly cut "is" from "ought"). But my other problem is that, AFAICT, anything can be equivalently expressed as a CSA, so I want an example of a system, preferably intelligent, that is not a CSA, so I know what I'm differentiating CSAs from.