It bothers me that much of the focus on the “high ratios of positive to negative feedback lead to improved performance” hypothesis doesn’t seem even to mention, much less rule out, the much more obvious reverse explanation for correlations between the two: improved performance leads to higher ratios of positive to negative feedback. It seems like it would be quite easy to go overboard if the first interpretation were believed to be more widely true than it actually is...
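A toy simulation (my own illustration, not anything from the thread, with made-up parameters) shows why the correlation alone can’t distinguish the two hypotheses: here causation runs *only* from performance to feedback ratio, yet the observed correlation is exactly what the “ratio → performance” story would also predict.

```python
import random

random.seed(0)

n = 1000
# Performance varies on its own in this toy model.
performance = [random.gauss(0, 1) for _ in range(n)]
# Feedback ratio tracks performance plus noise (reviewers praise
# good work more); it has NO causal effect on performance here.
ratio = [p + random.gauss(0, 1) for p in performance]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strongly positive despite the reversed causal arrow.
print(round(pearson(ratio, performance), 2))
```

Any study that only reports the correlation is equally consistent with either causal direction; it takes an intervention (or at least a natural experiment) to tell them apart.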
And given Luke_A_Somers’ and Eliezer’s reactions, I’m more than slightly scared for the state of LessWrong. But maybe I missed something.
I think that Eliezer’s reaction is more along the lines of “changing your behaviour when your beliefs change because of new evidence is a good thing!” rather than “this is definitely strong evidence of the given conclusion!”
If you believe updating on new evidence to be virtuous, it’s easy to fall into the trap of overdoing it without checking whether your new evidence is any good.
Whee, I’m scary.
More seriously, when am I allowed to update?
It’s highly consistent with my actual real life experience, but I hadn’t noticed it before.
The Canadians weren’t able to do a controlled experiment, but the results are still suggestive, because we know the feedback ratio was deliberately changed as an intervention.
The fact that something changed at the same time you intervened to change it isn’t proof, but it is evidence.
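“Evidence but not proof” has a precise Bayesian reading. A minimal sketch, with entirely made-up probabilities of my own choosing: observing improvement right after an intervention shifts belief toward “the intervention worked” without driving it anywhere near certainty.

```python
# All three numbers below are illustrative assumptions, not data.
prior = 0.5            # P(intervention works), before observing anything
p_obs_if_works = 0.8   # P(improvement observed | intervention works)
p_obs_if_not = 0.3     # P(improvement observed | intervention is inert)

# Bayes' rule: P(works | improvement).
posterior = (p_obs_if_works * prior) / (
    p_obs_if_works * prior + p_obs_if_not * (1 - prior)
)
print(round(posterior, 3))  # -> 0.727: belief rises, but stops well short of 1
```

The likelihood ratio (0.8 / 0.3 ≈ 2.7) is what makes the observation evidence; that it is finite is what keeps it from being proof.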
Not to mention the state of SIAI.
Eliezer seriously needs a second LW account, so that he can say “Yay!” without increasing existential risk.
Upon which I feel obligated to point out that the fact that he used his main account to say ‘Yay’ is only weak evidence against him already having a second account.