Interesting article, but I’m not so sure about the “cache” analogy. A typical cache in computer science has two major differences from the effect you’re pointing to:
1. A cache stores the result of a computation: the result of a complex algorithm, of a database or external-server query, of a disk read, … The computation is done once and the result is stored for later use. Very few caches in computer science store results that come from elsewhere and were never computed at least once. In your case, though, it’s not “I once did the complex job of thinking about love and rationality, concluded that love is not rational, cached that computation, and later reused it” but “I heard that love is not rational, I didn’t do the computation, yet I still stored the result”.
2. As a consequence of 1., a cached result in computer science is (almost) never wrong. It may be obsolete (an old version of a web page), but not wrong (that old version was correct when you fetched it). In the cases described by the article, the “cached thoughts” are wrong values stored in the cache, not merely obsolete ones.
What you refer to sounds more like a cache poisoning attack than the normal operation of a caching system.
I don’t know how to rephrase “cached thoughts” into something more accurate that is still as potent an expression, so I’ll stick with your “cached thoughts” for now, but those two differences make me uncomfortable with it.
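To make the two differences concrete, here is a minimal Python sketch (the class and names are my own, purely illustrative, not anything from the article): a conventional cache only ever stores values it actually computed, so the worst that can happen is staleness, while the “cached thought” pattern is closer to writing an unverified value straight into the cache.

```python
import time

# A conventional cache: the value is computed once, then reused.
# It can go stale, but it was correct at the moment it was stored.
class MemoCache:
    def __init__(self, ttl_seconds=60.0):
        self._store = {}          # key -> (value, timestamp)
        self._ttl = ttl_seconds

    def get_or_compute(self, key, compute):
        entry = self._store.get(key)
        if entry is not None:
            value, stored_at = entry
            if time.time() - stored_at < self._ttl:
                return value      # possibly obsolete, but once-correct
        value = compute()         # the computation actually happens here
        self._store[key] = (value, time.time())
        return value

    def inject(self, key, value):
        # The "cached thought" pattern: a value is stored without the
        # computation ever running -- closer to cache poisoning than to
        # the normal operation of a cache.
        self._store[key] = (value, time.time())


cache = MemoCache()
# Normal use: compute once, reuse later.
cache.get_or_compute("love_is_rational?", lambda: "requires actual thought")
# "Cached thought" use: store something merely heard, never computed.
cache.inject("love_is_rational?", "no (heard it somewhere)")
```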
indeed.
if we decouple the cost of caching into “was true but is now false” and “was never true”, it may be that one dominates the other in likelihood. so maybe the most efficient solution to the “cached thought” problem is not rethinking things but ignoring most things by default. this, however, carries the opportunity cost of false negatives.
i’ve personally found that i am very dependent on cached thoughts when learning or doing something new (not necessarily a bad thing), favoring breadth over depth. what i do is try to force each cached thought to have a contradictory, or at least very different, twin.
e.g. though i have never coded in it, if i hear “C++”, i’ll (try to) think both “not worth it, too unsafe and error-prone” and “so worth it, speed and libraries”. when i don’t have enough data to hold a strong opinion, i’m ok with caching thoughts, as long as i know they are cached and i try to cache “contradictory twins” together.
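a toy sketch of what i mean, in python (all names made up for the example, nothing from the article):

```python
# every cached take is stored together with an opposing take, and is
# explicitly flagged as cached rather than computed.
cached_takes = {
    "C++": {
        "cached": True,   # heard, not worked out
        "for": "so worth it, speed and libraries",
        "against": "not worth it, too unsafe and error-prone",
    },
}

def recall(topic):
    entry = cached_takes.get(topic)
    if entry is None:
        return "no opinion yet"
    # surface both twins so neither completes the pattern on its own
    return f"(cached) {entry['for']} / {entry['against']}"

print(recall("C++"))
```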