The instinctive status hierarchy treats factual beliefs in pretty much the same way as policy proposals.
Bingo. Like I’ve harped on and on about, humans don’t naturally decouple beliefs from values, or ought from is. If an ought (esp. one involving the distribution of resources) hinges on an “is”, it’s too often the “is” that gets adjusted, self-servingly, rather than the ought.
Take note, Wei_Dai and everyone who uses could-should-agents as models of humans.
I agree with your point about the is/ought non-distinction. When you refer to CS Agents, are you just emphasising the extent to which humans diverge from that idealized model?
For my part, I find the CSA model interesting but don’t find CSAs a remotely useful way to model humans. That is probably because ‘could and should’ are the easy part; I need other models to predict the ‘but probably will’ bit.
Yes, I think humans are hard to model as CSAs (because they don’t cleanly cut “is” from “ought”). But my other problem with the model is that, AFAICT, anything can be equivalently expressed as a CSA, so I want an example of a system, preferably intelligent, that is not a CSA, so I know what I’m differentiating it from.
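To make the “could/should vs. probably will” distinction concrete, here is a minimal sketch of what I mean by a CSA-style choice loop. It is only an illustrative toy under my own assumptions; the names `could_options`, `should_score`, and the self-serving-bias wrinkle are mine, not anything from the CSA posts:

```python
# Toy could-should agent: enumerate what the agent *could* do,
# score each option by what it *should* want, and pick the argmax.
# Illustrative sketch only; structure and names are my own assumptions.

def csa_choose(could_options, should_score):
    """Return the option an idealized CSA would pick."""
    return max(could_options, key=should_score)

# The 'but probably will' wrinkle for humans: the "is" side (here, the
# scores) gets nudged toward whatever is self-serving, rather than the
# ought being revised.
def humanlike_choose(could_options, should_score, self_serving_bias):
    return max(could_options,
               key=lambda option: should_score(option) + self_serving_bias(option))
```

On this toy picture, the first function is the easy “could and should” part; the divergence people are pointing at lives entirely in the extra bias term of the second.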