By that I mean, you are stressed because you are faced with an intractable knot, so what you really need to do is optimize your knot-undoing procedure.
Or perhaps one should stop distracting oneself with stupid abstract knots altogether and instead revolt against the prefrontal cortical overmind, as I have previously accidentally-argued while on the boundary between dreams and wakefulness:
The prefrontal cortex is exploiting executive oversight to rent-seek in the neural Darwinian economy, which results in egodystonic wireheading behaviors and self-defeating use of genetic, memetic, and behavioral selection pressure (a scarce resource), especially at higher levels of abstraction/organization where there is more room for bureaucratic shuffling and vague promises of “meta-optimization”, where the selection pressure actually goes towards the cortical substructural equivalent of hookers and blow. Analysis across all levels of organization could be given but is omitted due to space, time, and thermodynamic constraints. The prefrontal cortex is basically a caricature of big government, but it spreads propagandistic memes claiming the contrary in the name of “science”, which just happens to be largely funded by prefrontal cortices. The bicameral system is actually very cooperative despite misleading research in the form of split-brain studies attempting to promote the contrary. In reality they are the lizards. This hypothesis is a possible explanation for hyperbolic discounting, akrasia, depression, Buddhism, free will, or, come to think of it, basically anything that at some point involved a human brain. This hypothesis can easily be falsified by a reasonable economic analysis.
If this makes no sense to you that’s probably a good thing.
Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes the understanding of that paragraph?
Also, is there a collection of all Kaasisms somewhere? He’s pretty much my favorite humorist these days, and the suspicion that there are far more of those incisive aphorisms than he publishes to Twitter is going to haunt me with visions of unrealized enjoyment.
Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes the understanding of that paragraph?
I recommend against it for that secondarily, but primarily because it probabilistically implies an overly lax conception of “understanding” and an unacceptably high tolerance for hard-to-test just-so speculation. (And if someone really understood what sort of themes I was getting at, they’d know that my disclaimer didn’t apply to them.) Edit: When I say “I recommend against it for that secondarily”, what I mean is, “sure, that sounds like a decent reason, and I guess it’s sort of possible that I implicitly thought of it at the time of writing”. Another equally plausible secondary reason would be that I was signalling that I wasn’t falling for the potential errors that primarily caused me to write the disclaimer in the first place.
Also, is there a collection of all Kaasisms somewhere?
I don’t think so, but you could read the entirety of his blog Black Belt Bayesian, or move to Chicago and try to win his favor at LW meetups by talking about the importance of thinking on the margin, or maybe pay him by the hour to be funny, or something. If I was assembling a team of 9 FAI programmers I’d probably hire Steven Kaas on the grounds that he is obviously somehow necessary.
Accidentally saw an image macro that’s a partial tl;dr of this: http://knowyourmeme.com/photos/211139-scumbag-brain
Yay, scumbag brain. To be fair, though, I should admit I’m not exactly the least biased assessor of the prefrontal cortex. http://lesswrong.com/lw/b9/welcome_to_less_wrong/5jht