And the first thing that was done with this awesome fact here was to 'update' in the direction of trusting the PUA community's opinion of women more than women themselves; that was done by the author. That's not even a sufficiently complete update, because the PUA community (especially the manipulative misogynists with zero morals, whose ideal is to tick every box on the clinical-sociopath checklist, and whose bragging has selection bias and unscientific data collection written all over it) is itself prone to the typical mind fallacy, as well as a bunch of other fallacies, when it sees women as beings just as morally reprehensible as its own members.
This is a really good point …
This, cousin_it, is the case example of why you shouldn't be writing good work for LW.
… which utterly fails to establish the claim that you attempt to use it for.
Context, man, context. cousin_it's misgivings are about the low local standards. This article is precisely a good example of such low standards, and note that I was not picking a strawman here: it was chosen as an example of the best. The article would have been torn to shreds in most other intelligent places (consider the arstechnica Observatory forum) for the bit I am talking about.
edit: also, on the 'good point': this is how a lot of the rationality here goes: partial updates handled incorrectly. You have a fact that affects literally every opinion a person holds about another person, and you proceed to update only in the direction that confirms your existing opinions and your existing choices of whom to trust. LW has an awfully low standard for anything that agrees with local opinions. The same thing pops up in utility discussions. E.g. certain things (the possibility of a huge world) scale down all utilities in the system, leaving all actions unchanged. But the actual update, in an agent that does not handle meta-reasoning correctly as a real-time system, rescales some A before some B, and suddenly there is an enormous difference between the utilities. It's just a broken model. Theoretically speaking, A updated and B not updated is in some pointwise sense more accurate than neither updated, but everything that depends on the relation between A and B is messed up by the partial update. The algorithms for real-time belief updating are incredibly non-trivial (as are the algorithms for Bayesian probability calculation on graphs in general, given cycles and loops). The theoretical understanding behind the rationalism here is just really, really, really poor.
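To make the partial-update failure concrete, here is a minimal sketch (my own illustration with hypothetical numbers, not anything from the discussion above): a uniform rescaling of all utilities should leave every choice unchanged, but if the agent rescales A before B and acts in between, the comparison it acts on reverses, even though the half-updated state is pointwise closer to the fully updated one.

```python
# Minimal sketch (hypothetical numbers): a uniform rescaling of all
# utilities leaves every choice unchanged, but applying it to A
# before B distorts the comparison the agent actually acts on.

def best(utilities):
    """Return the action with the highest utility."""
    return max(utilities, key=utilities.get)

u = {"A": 10.0, "B": 8.0}   # original utilities: A preferred
scale = 1e-6                # e.g. discovering a "huge world"

fully_updated = {k: v * scale for k, v in u.items()}
assert best(u) == best(fully_updated)  # uniform scaling changes nothing

# Partial update: A has been rescaled, B has not been reached yet.
half_updated = {"A": u["A"] * scale, "B": u["B"]}

# Pointwise, half_updated is "closer" to fully_updated than u is
# (A is now exactly right), yet any decision taken in this state
# reverses the preference: B now dwarfs A by a huge margin.
print(best(u), best(fully_updated), best(half_updated))  # A A B
```

The point of the sketch is exactly the one made above: the half-updated state is "more accurate" element by element, but everything that depends on the relation between A and B is wrong until the update completes atomically.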