Decision theory with ordinals is actually well-studied and commonly used, specifically in language and grammar systems. See papers on Optimality Theory.
The resolution to these “tier” problems is to assign every “constraint” (thing that you value) an abstract variable, generating a polynomial algebra in some ungodly number of variables, and then to assign a weight function to that algebra, which is essentially assigning every variable an ordinal number, as you’ve been doing.
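As a toy illustration of the tier behaviour (my own sketch, not code from the OT literature): ranking constraints strictly means candidates are compared lexicographically on their violation counts, so one violation of a higher-ranked constraint outweighs any number of violations further down. The constraint names and candidates below are made up.

```python
def ot_winner(candidates, ranking):
    """Pick the OT-style winner.

    candidates: {name: {constraint: violation_count}}
    ranking: constraint names from highest- to lowest-ranked
    """
    def profile(name):
        return tuple(candidates[name].get(c, 0) for c in ranking)
    # Python tuple comparison is already lexicographic, which is
    # exactly the ordinal "tier" behaviour: the highest-ranked
    # constraint decides unless there is a tie, then the next, etc.
    return min(candidates, key=profile)

candidates = {
    "candidate_a": {"Faithfulness": 0, "Markedness": 3},
    "candidate_b": {"Faithfulness": 1, "Markedness": 0},
}
# With Faithfulness on top, a wins despite 3 lower-tier violations;
# reverse the ranking and b wins.
print(ot_winner(candidates, ["Faithfulness", "Markedness"]))  # candidate_a
print(ot_winner(candidates, ["Markedness", "Faithfulness"]))  # candidate_b
```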
Just as perspective on the abstract problem, there are two confounders that I don’t see addressed.
One is that every time you assign a value to something you should actually be assigning a distribution of possible values to it. It’s certainly possible to tighten these distributions in theory but I don’t think that human value systems actually do tighten them enough to reduce this to a mathematically tractable problem; and if they DO constrain things that much I’m certain we don’t know it. Which is just saying that this problem is going to end up with people reaching different intuitive conclusions.
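The first point can be made concrete (again my own toy sketch, with made-up numbers): once each constraint’s weight is a distribution rather than a point value, “A is preferred to B” stops being a yes/no answer and becomes a probability, and the wider the distributions, the closer that probability sits to a coin flip.

```python
import random

random.seed(0)

def pref_probability(violations_a, violations_b, weight_spread, trials=10_000):
    """Monte Carlo estimate of P(candidate A is preferred to B)
    when constraint weights are noisy rather than fixed."""
    wins = 0
    for _ in range(trials):
        # Sample one weight per constraint around fixed means;
        # weight_spread controls how vague the value system is.
        w = [random.gauss(mu, weight_spread) for mu in (2.0, 1.0)]
        cost_a = sum(wi * vi for wi, vi in zip(w, violations_a))
        cost_b = sum(wi * vi for wi, vi in zip(w, violations_b))
        wins += cost_a < cost_b
    return wins / trials

a, b = (0, 1), (1, 0)  # violation counts on two constraints
print(pref_probability(a, b, weight_spread=0.1))  # tight weights: near-certain
print(pref_probability(a, b, weight_spread=3.0))  # wide weights: much closer to 50/50
```

With tight distributions the verdict is essentially deterministic; widen them and the same violation profiles no longer settle the question, which is the loss of computational precision described above.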
Two is that it tends to be the case that these systems are wildly underspecified. If you do the appropriate statistics to figure out how people rank constraints, you don’t get an answer, you get some statistics about an answer, and the probability distributions on people’s preferences are WIIIIIIDE. In order to solve this problem in linguistics people use subject- and problem-specific methods to throw together ad hoc conclusions. So I guess these are really the same complaint; you shouldn’t be using single-value assignments and when you stop doing that you lose the computational precision that makes talking about ordinal numbers really interesting.
(for reference my OT knowledge comes entirely from casual conversations with people who do it professionally; I’m fairly confident in these statements but I’d be open to contradiction from a linguist)
I think you mean ordinals, not cardinals.
Edited, thanks.