Agreed. The mapping from Amazon ratings to actual trustworthiness is pretty nonlinear.
Nonlinearity alone wouldn’t be a problem. The problem is that the mapping isn’t injective.
For Less Mathy Humans(tm): “100% trust between humans is not expressible by any Amazon rating” (I think)
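To make the injectivity point concrete, here's a minimal sketch. The curve shape and the 0.9 cap are made-up numbers for illustration, not anything Amazon actually does: distinct ratings collapse onto the same trust estimate (not injective), and nothing in the rating scale ever maps to full 1.0 trust.

```python
def trust_from_rating(stars: float) -> float:
    """Map a 1-5 star rating to an estimated trust level in [0, 1).

    Hypothetical curve: trust builds slowly at low ratings, climbs
    quickly near the top, and is hard-capped at 0.9 -- full
    interpersonal trust is never in the image of the mapping.
    """
    # Normalize stars to [0, 1]
    x = (stars - 1.0) / 4.0
    # Convex curve with a hard cap: the last bit of trust simply
    # cannot be expressed through the rating system.
    return min(0.9, 0.95 * x ** 2)

if __name__ == "__main__":
    for stars in (3.0, 4.0, 4.7, 4.9, 5.0):
        print(f"{stars:.1f} stars -> trust ~ {trust_from_rating(stars):.2f}")
    # 4.9 and 5.0 stars both land on 0.90: two different ratings, same
    # trust estimate, so the mapping is not injective.
    # And no rating reaches 1.0: "100% trust" is outside the image entirely.
```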