Hm. I’m trying to predict log of performance (technically negative log of performance) rather than performance directly, but I’d imagine you are too?
If you plot your residuals against pi/murphy, like the graphs I have above, do you see no remaining effect?
Ah, that would be it. (And I should have realized earlier that a linear prediction on logs would differ in this way.) No, my formulas don't relate to the log: I take the log for some measurement purposes, but I divide out my guessed formula for the multiplicative effect of each variable on the total, rather than subtracting a formula for its contribution to the log.
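To make sure we mean the same thing, here's a minimal sketch of the relationship (the numbers are made up, not from the dataset): dividing a guessed multiplicative factor out of performance is the same as subtracting the log of that factor from log(performance), so my formulas only become log-scale terms once you take their log.

```python
import numpy as np

# Purely illustrative numbers; the point is only the divide-vs-subtract relationship.
performance = np.array([0.90, 0.72, 0.55])
guessed_factor = np.array([0.95, 0.80, 0.60])  # guessed multiplicative effect of one variable

# Dividing the guessed factor out of performance on the raw scale...
adjusted = performance / guessed_factor

# ...is the same as subtracting log(guessed_factor) on the log scale.
assert np.allclose(np.log(adjusted), np.log(performance) - np.log(guessed_factor))
```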
So, I guess you could check whether these formulas work satisfactorily for you:
log(1-0.004*(Murphy’s Constant)^3) and log(1-10*abs((Local Value of Pi)-3.15))
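Transcribed into Python for convenience (the constants are the ones in the formulas above; the function names and the NumPy wrapping are just my own additions):

```python
import numpy as np

def log_murphy_effect(murphys_constant):
    """Guessed log-scale effect of Murphy's Constant on performance."""
    return np.log(1 - 0.004 * murphys_constant**3)

def log_pi_effect(local_value_of_pi):
    """Guessed log-scale effect of the Local Value of Pi on performance."""
    # Note: the argument of log goes negative once |pi - 3.15| exceeds 0.1,
    # so this only makes sense over a limited range of pi values.
    return np.log(1 - 10 * np.abs(local_value_of_pi - 3.15))

# These would be subtracted from log(performance), equivalently the
# corresponding factors divided out of performance, to remove each effect.
```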
In my graphs, I don't see an effect that looks clearly non-random. There could be a small wiggle, but nothing systematic beyond around a factor of 0.003 or so, and nothing larger than I could believe is due to chance. (To reduce random noise, though, I ought to extend to the full dataset rather than the restricted set I am using.)
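And for the residual plots themselves, this is roughly what I mean (a sketch on synthetic stand-in data, since I don't know what your columns look like; the real values would be substituted in):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic stand-in data; by construction the residual here is pure noise,
# which is what "no remaining effect" would look like.
murphy = rng.uniform(0.0, 5.0, 500)
local_pi = rng.uniform(3.08, 3.22, 500)
log_perf = (np.log(1 - 0.004 * murphy**3)
            + np.log(1 - 10 * np.abs(local_pi - 3.15))
            + rng.normal(0, 0.02, 500))  # noise stands in for everything else

# Residual of log(performance) after subtracting both guessed log-effects.
residual = (log_perf
            - np.log(1 - 0.004 * murphy**3)
            - np.log(1 - 10 * np.abs(local_pi - 3.15)))

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
axes[0].scatter(murphy, residual, s=5)
axes[0].set_xlabel("Murphy's Constant")
axes[1].scatter(local_pi, residual, s=5)
axes[1].set_xlabel("Local Value of Pi")
axes[0].set_ylabel("residual of log(performance)")
fig.tight_layout()
plt.show()
```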