… which utterly fails to establish the claim that you attempt to use it for.
Context, man, context. cousin_it’s misgivings are about the low local standards. This article is precisely a good example of such low local standards, and note that I was not picking a strawman here: it was chosen as an example of the best. The article would have been torn to shreds in most other intelligent places (consider the arstechnica Observatory forum) for the bit I am talking about.
edit: also on the ‘good point’: this is typical of how rationality is practiced here: partial updates handled incorrectly. You have a fact that affects literally every opinion a person holds about another person, yet you update only in the direction that confirms your existing opinions and your choice of whom to trust. LW has an awfully low standard for anything that agrees with local opinions.

This also pops up in utility discussions. E.g. certain considerations (the possibility of a huge world) scale down all utilities in the system uniformly, leaving all actions unchanged. But the update that actually happens in an agent that does not handle meta-reasoning correctly for a real-time system touches some A before some B, and suddenly there is an enormous difference between utilities. It’s just a broken model. A state where A has been updated and B has not is, in some theoretical sense, more accurate than one where neither has been updated, but everything that depends on the relation between A and B is messed up by the partial update. The algorithms for real-time belief updating are incredibly non-trivial (as are the algorithms for Bayesian probability calculation on graphs in general, given cycles and loops). The theoretical understanding behind the rationalism here is just really, really, really poor.
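To make the partial-update point concrete, here is a minimal sketch (all names and numbers are hypothetical, not from any particular LW discussion): a consideration that is supposed to rescale every utility by the same factor leaves the agent’s choice unchanged when applied atomically, but an agent that has rescaled some utilities and not yet others is temporarily working with a broken model in which the choice flips.

```python
# Hypothetical utilities over three actions; B is the best action initially.
utilities = {"A": 10.0, "B": 12.0, "C": 11.0}
scale = 0.01  # hypothetical uniform rescaling, e.g. from a "huge world" consideration


def best(u):
    """Return the action with the highest utility."""
    return max(u, key=u.get)


# Consistent (atomic) update: everything is rescaled together, so the choice is unchanged.
consistent = {k: v * scale for k, v in utilities.items()}
assert best(consistent) == best(utilities) == "B"

# Partial update: A and B have been rescaled, but C has not been reached yet.
partial = {"A": utilities["A"] * scale, "B": utilities["B"] * scale, "C": utilities["C"]}
# Each updated number is individually "more accurate", but anything that depends on the
# relation between updated and not-yet-updated utilities is now wrong: C appears to
# dominate by a huge margin, and the agent's choice flips.
assert best(partial) == "C"

print("consistent:", consistent, "-> best:", best(consistent))
print("partial:   ", partial, "-> best:", best(partial))
```

The same failure mode is what makes loopy graphs hard: whichever node you happen to update first looks locally more accurate while every quantity defined relative to its not-yet-updated neighbours is garbage until the update finishes propagating.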