Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes the understanding of that paragraph?
I recommend against it for that secondarily, but primarily because it probabilistically implies an overly lax conception of “understanding” and an unacceptably high tolerance for hard-to-test just-so speculation. (And if someone really understood what sort of themes I was getting at, they’d know that my disclaimer didn’t apply to them.) Edit: When I say “I recommend against it for that secondarily”, what I mean is, “sure, that sounds like a decent reason, and I guess it’s sort of possible that I implicitly thought of it at the time of writing”. Another equally plausible secondary reason would be that I was signalling that I wasn’t falling for the potential errors that primarily caused me to write the disclaimer in the first place.
Also, is there a collection of all Kaasisms somewhere?
I don’t think so, but you could read the entirety of his blog Black Belt Bayesian, or move to Chicago and try to win his favor at LW meetups by talking about the importance of thinking on the margin, or maybe pay him by the hour to be funny, or something. If I was assembling a team of 9 FAI programmers I’d probably hire Steven Kaas on the grounds that he is obviously somehow necessary.