Toby, I’m not sure that I understand what you want me to do.
Especially as the main reason I don’t dabble in mainstream philosophy is that I consider it too vague for AI purposes. For example, in classical causal decision theory, there’s abstruse math done with a function p(x||y) (if I recall the notation correctly) that one is never told how to compute—it’s taken as a primitive. Judea Pearl could have told them, but nobody seems to have felt the need to develop the theory further, since they already had what looked to them like math: lots of neat symbols. This kind of “precision” does not impress me.
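To make the contrast concrete: assuming that function is meant to stand for the probability of an outcome y given that the act x is performed, Pearl writes it as P(y | do(x)), and given a causal graph in which a set of variables Z blocks the back-door paths from X to Y, it is computed by the adjustment formula

$$P(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid x, z)\, P(z).$$

That is a recipe, not a primitive: hand me the graph and the observational distribution, and I can actually calculate the number.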
In general, I am skeptical of dressing ideas up in math when they don’t deserve the status of math; I consider it academic status-seeking, and I try not to lay claim to such status when I don’t feel I’ve earned it. But if you can say specifically where you’re looking for precision, I can try to respond.