“Prescriptive” seems like it can be split further. There’s “what is the best thing to do in general with limited resources”, i.e., “how to write an AI”—this is close to normative but not quite the same thing—and then there’s “what specifically a human should do to compensate for biases”. Which is meant by “prescriptive” in the book? The description above doesn’t make it clear. We should have terms for both.
The book is focused on humans.
I’m not quite sure I agree that the split is valuable. Many of the prescriptive recommendations I know of try to replace parts of decision-making entirely, which is different from bias-compensation; then again, building from scratch is very different from adapting a currently working system. I’ll have to chew on that for a while (but feel free to put forth some implications of having such a split).
(For example, one thing I’m considering is that “limited resources” implies multiple limits to me: the decision-making system I would prescribe for myself and the one I would prescribe for an IQ 70 person are different. If I’m comfortable calling both of those “prescriptive,” do I really need another word for what I’d tell an AI to do?)