So the pseudo-quantities in your example are strength ratings on a 1-10 scale?
I actually think that’s acceptable, assuming the ratings on each scale are equally spaced and the weights correspond to that spacing. For instance, space strength ratings evenly from 1 to 10, and space weight ratings evenly from 1 to 10 (where 10 is the best, i.e., lightest), so that each interval corresponds to roughly the same level of improvement in the prototype. Then assign weights according to how important an improvement along one axis is compared to the other. For instance, if improving strength by one point on the scale is twice as valuable as improving weight by one point, we can give strength a weight of 2, and computations like:
Option A, strength 3, weight 6, total score 2(3) + 6 = 12
Option B, strength 5, weight 3, total score 2(5) + 3 = 13
make sense.
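A minimal sketch of that computation in Python (the weights and ratings are just the ones from the example above):

```python
# Weighted-sum scoring: each option's total is sum(importance_weight * rating).
# Strength improvements count double, per the example above.
weights = {"strength": 2, "weight": 1}

options = {
    "A": {"strength": 3, "weight": 6},
    "B": {"strength": 5, "weight": 3},
}

def score(ratings):
    """Linear weighted sum of an option's attribute ratings."""
    return sum(weights[attr] * value for attr, value in ratings.items())

for name, ratings in options.items():
    print(name, score(ratings))  # A -> 12, B -> 13
```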
You still have one degree of freedom. What if you ranked from 10 to 20? Or from −5 to 5? As a limiting case, consider ratings from 100 to 110: the attribute with the highest ratings (strength) would totally swamp the calculation, becoming the only concern.
Once you have scale and offset correctly calibrated, you still need to worry about nonlinearity. In this case (using rank indices), the problem is even worse. Like I said, rank indices lose information. What if the options are all about the same weight, but one is drastically lighter? The rankings are identical no matter how much difference there is. That’s not right. Using something approximating a real-valued ranking (rank from 1-10) instead of rank indices reduces the problem to mere nonlinearity.
This is not as hard as FAI, but it’s harder than pulling random numbers out of your butt, multiplying them, and calling it a decision procedure.
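A toy illustration of the information rank indices throw away (the masses are invented): two options that are nearly tied and one that is drastically lighter come out evenly spaced as rank indices, while a rating scaled to the actual masses keeps the gap.

```python
# Invented masses in kg: A and B are nearly identical, C is drastically lighter.
masses = {"A": 5.0, "B": 4.9, "C": 1.0}

# Rank indices: 1 = lightest, 3 = heaviest. The near-tie between A and B and the
# huge gap down to C come out with exactly the same spacing.
order = sorted(masses, key=masses.get)
rank_index = {name: i + 1 for i, name in enumerate(order)}

# A real-valued 1-10 rating proportional to the actual masses (10 = lightest)
# preserves how big the differences are.
lo, hi = min(masses.values()), max(masses.values())
rating = {name: 1 + 9 * (hi - m) / (hi - lo) for name, m in masses.items()}

print(rank_index)  # {'C': 1, 'B': 2, 'A': 3}
print(rating)      # {'A': 1.0, 'B': ~1.22, 'C': 10.0}
```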
I agree that ranking the weights from 1 to N is idiotic because it doesn’t respect the relative importance of each characteristic. However, changing the ratings to run from 101 to 110 on every scale will just add a constant to each option’s value:
Option A, strength 103, mass 106, total score 2(103) + 106 = 312
Option B, strength 105, mass 103, total score 2(105) + 103 = 313
(I changed ‘weight’ to ‘mass’ to avoid confusion with the other meaning of ‘weight’.)
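A quick numerical check of that (same toy weights and ratings as above): shifting every rating by the same offset adds the same constant to every option’s total, so the gap between the options, and the winner, are unchanged.

```python
weights = {"strength": 2, "mass": 1}
options = {
    "A": {"strength": 3, "mass": 6},
    "B": {"strength": 5, "mass": 3},
}

def score(ratings, offset=0):
    # Adding `offset` to every rating adds offset * sum(weights) to the total,
    # which is the same constant for every option.
    return sum(w * (ratings[attr] + offset) for attr, w in weights.items())

for offset in (0, 100):
    totals = {name: score(r, offset) for name, r in options.items()}
    print(offset, totals, "winner:", max(totals, key=totals.get))
# 0   {'A': 12, 'B': 13}    winner: B
# 100 {'A': 312, 'B': 313}  winner: B  (same 1-point gap)
```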
Using something approximating a real-valued ranking (rank from 1-10) instead of rank indices reduces the problem to mere nonlinearity.
I assume you mean using values for the weights that correspond to importance, which isn’t necessarily 1-10. For instance, if strength is 100 times more important than mass, we’d need to have weights of 100 and 1.
You’re right that this assumes that the final quality is a linear function of the component attributes: we could have a situation where strength becomes less important when mass passes a certain threshold, for instance. But using a linear approximation is often a good first step at the very least.
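One way to picture that kind of nonlinearity (the threshold and the numbers are invented, and raw mass is used here instead of a mass rating):

```python
# Linear model: quality is a fixed weighted sum, with heavier always a bit worse.
def linear_quality(strength, mass):
    return 2.0 * strength - 1.0 * mass

# Nonlinear variant: once mass passes a limit, extra strength buys much less,
# e.g. because the prototype is already too heavy to be usable at all.
def threshold_quality(strength, mass, mass_limit=8.0):
    strength_weight = 2.0 if mass <= mass_limit else 0.5
    return strength_weight * strength - 1.0 * mass

print(linear_quality(9, 5), threshold_quality(9, 5))  # 13.0 13.0  (models agree)
print(linear_quality(9, 9), threshold_quality(9, 9))  # 9.0 -4.5   (they diverge)
```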
Option A, strength 103, mass 106, total score 2(103) + 106 = 312
Option B, strength 105, mass 103, total score 2(105) + 103 = 313
Oops, I might have to look at that more closely. I think you are right: the shared offset adds the same amount to every option’s total, so it cancels out of the comparison.
I assume you mean using values for the weights that correspond to importance, which isn’t necessarily 1-10. For instance, if strength is 100 times more important than mass, we’d need to have weights of 100 and 1.
Using 100 and 1 for something that is 100 times more important is correct (assuming you are able to estimate the weights; 100x is awfully suspicious). The idiot procedure was using rank indices, not real-valued weights.
But using a linear approximation is often a good first step at the very least.
Agreed. Linearity is a valid assumption.
The error is using uncalibrated ratings from 0-10, or worse, rank indices. A linear-valued rating from 0-10 has the potential to carry the information properly, but that does not mean people can produce calibrated estimates there.
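A toy example of the calibration problem (all numbers invented): the same two prototypes, scored with the same importance weights, by two raters whose 0-10 strength scales are stretched differently, and the “best” option flips.

```python
weights = {"strength": 2, "mass": 1}

# Both raters agree on the mass ratings; their strength scales differ:
# one rates roughly proportionally, the other compresses the top end.
ratings_by_rater = {
    "proportional": {"A": {"strength": 10, "mass": 5},
                     "B": {"strength": 6,  "mass": 10}},
    "compressed":   {"A": {"strength": 8,  "mass": 5},
                     "B": {"strength": 7,  "mass": 10}},
}

def total(ratings):
    return sum(weights[attr] * value for attr, value in ratings.items())

for rater, ratings in ratings_by_rater.items():
    totals = {name: total(r) for name, r in ratings.items()}
    print(rater, totals, "->", max(totals, key=totals.get))
# proportional {'A': 25, 'B': 22} -> A
# compressed   {'A': 21, 'B': 24} -> B
```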