This is a good example of people relying too much on linear regressions. You can’t interpret coefficients of linear regressions the way they do. They’re good for exploratory data analysis and their interpretations of the coefficients are reasonable hypotheses to consider, but they should actually test them.
I’m interested. Got some time to give me a few keywords or details?
Covariance is one keyword. If the data is linear but not of maximal dimension, the predictors covary. That is to be expected in situations like this, where a rating scale is converted into a bunch of boolean columns. ETA: and even if one did not expect adjacent rating values to be correlated, the fact that the total number of ratings is about the same for everyone is itself a reduction of dimension.
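Roughly, a sketch with simulated data (not the actual dataset): if everyone gets about the same total number of ratings and you split that total into per-value counts, the count columns come out correlated.

```python
# Simulated illustration: converting a 1-5 rating scale into per-value counts
# yields correlated predictors, because the counts (nearly) sum to a constant.
import numpy as np

rng = np.random.default_rng(0)
n_people = 1000
counts = np.empty((n_people, 5))

for i in range(n_people):
    p = rng.dirichlet(np.ones(5))          # this person's propensity for each rating
    n_ratings = rng.poisson(100)           # roughly the same total for everyone
    ratings = rng.choice(5, size=n_ratings, p=p)
    counts[i] = np.bincount(ratings, minlength=5)

# Correlation matrix of the five count columns (m1..m5): far from the identity.
print(np.round(np.corrcoef(counts, rowvar=False), 2))
```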
But if the data is not linear, many more things can go wrong. I don’t know names for them.
Matt Simpson: I suppose that could solve the problem of covariance, but that’s not what I’m talking about.
It would be interesting to see higher-dimensional plots. For example, the scatter plot of average score vs. number of messages could be colored according to the number of 1-ratings, and similarly for the other rating values.
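Something like this, say (made-up data and column semantics, just to show the kind of plot I mean; I don’t have the dataset):

```python
# Sketch of the suggested plot on simulated data: average score vs. messages,
# with each point colored by the number of 1-ratings that profile received.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
avg_score = rng.uniform(1, 5, size=500)
n_ones = rng.poisson(10 * np.maximum(5 - avg_score, 0.1))
n_messages = rng.poisson(np.exp(avg_score) + 0.5 * n_ones)

fig, ax = plt.subplots()
sc = ax.scatter(avg_score, n_messages, c=n_ones, cmap="viridis", s=10)
ax.set_xlabel("average attractiveness score")
ax.set_ylabel("messages received")
fig.colorbar(sc, label="number of 1-ratings")
plt.show()
```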
Thanks for the pointer, I think I get the idea. To check: the question is whether many votes of 1 by themselves lead to more messages, or whether they only lead to more messages when there are also many votes of 5. Since the dataset contained many women who got many 1s, many 5s, and many messages all at once, the linear regression produced absurd coefficients that happen to fit this dataset but do not model the (non-linear) reality; to capture that, one would have to consider another dimension, like “disagreement”, or whatever. And of course, all of this would be much clearer to me if I’d sit down and just read a damn ultra-basic statistics book and learn that stuff. Gah.
That is a good example of an error one could make by believing the data is linear (and thus trusting the regression coefficients) when it is not. If their non-linear model were correct, we would get regression coefficients like the ones we see. If we trusted the regression coefficients too much (implicitly assuming the data is linear), the positive coefficient on the number of 1s would suggest that having all 1s is good. But it is not: their model says it is not, and the data says it is not (e.g., the scatter plot).
I think that is what you are saying. It is certainly not their mistake—they believe their model. I am not saying anything so specific, but it is the type of mistake that I am talking about. Also, there are lots of non-linear models that lead to the same regression.
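For instance, here is a sketch with simulated data and an assumed functional form: the “true” model rewards 5s plus a polarization bonus that requires both many 1s and many 5s, yet a plain linear regression on the counts still assigns a positive coefficient to the number of 1s.

```python
# Simulated illustration: a non-linear truth (the bonus needs many 1s AND many
# 5s) produces a positive linear coefficient on the 1s, even though a profile
# with only 1s would do badly under the true model.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
m1 = rng.poisson(30, n)                       # counts of 1-ratings
m5 = rng.poisson(30, n)                       # counts of 5-ratings

# Assumed truth: 5s help, and "polarization" (lots of both) adds a bonus.
messages = 2 * m5 + 3 * np.minimum(m1, m5) + rng.normal(0, 5, n)

# Ordinary least squares on the raw counts.
X = np.column_stack([np.ones(n), m1, m5])
coef, *_ = np.linalg.lstsq(X, messages, rcond=None)
print(np.round(coef, 2))   # the m1 coefficient comes out positive anyway
```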
I interpreted this comment as saying that they should test whether the coefficients are equal to 0 before interpreting them. There’s evidence that they did this: if you look at the “if you’re into algebra” sidebar on the right, they dropped the m3 variable because it had a large p-value, which is the kind of check sketched below.
Is that what you were getting at?
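The kind of test I mean would look roughly like this (simulated data; the m1..m5 names follow the post’s sidebar, everything else is made up):

```python
# Sketch of testing whether each coefficient differs from zero; a variable with
# a large p-value (here m3, which is irrelevant by construction) gets dropped.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
M = rng.poisson(20, size=(n, 5))                          # columns m1..m5
y = 0.5 * M[:, 0] + 1.5 * M[:, 4] + rng.normal(0, 5, n)   # m2..m4 truly irrelevant

fit = sm.OLS(y, sm.add_constant(M)).fit()
print(fit.pvalues)   # large p-values on the irrelevant columns justify dropping them
```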