The point here is that I think this is one part of what teaching me how to think is really supposed to mean. To be just a little less arrogant. To have just a little critical awareness about myself and my certainties. Because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded. I have learned this the hard way, as I predict you will, too.
Because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded.
There is also a very large amount of stuff that one is automatically certain of that is correct, though trivial: data like “liquid water is wet”. I’m not sure how one would even practically quantify what fraction of the statements one is certain of are true. Even if one could efficiently test them, how would one list them in the first place? In the current state of science, tracing a full human neural network, and then converting its beliefs into a list of testable statements, is beyond our capabilities.
I’m curious about this “liquid water is wet” statement. Obviously I agree, but for the sake of argument, could you taboo “is” and tell me the statement again? I’m trying to understand how your algorithm feels from the inside.
If you’re curious how to quantify fractions of statements, you might enjoy this puzzle I heard once. Suppose you’re an ecological researcher and you need to know the number of fish in a large lake. How would you get a handle on that number?
One of the parts of “liquid water is wet” is that a droplet of it will spread out on many common surfaces: salt, paper, cotton, etc. Yes, it is a bit tricky to unpack what is meant by “wet” (perhaps some other properties, like not withstanding shear stress, are also folded in), but I don’t think that it is just a tautology, with “wet” defined as the set of properties that liquid water has.
Re the catch/count/mark/release/recapture/count puzzle: the degree to which that is feasible depends on how well one can do (reasonably) unbiased sampling. I’m skeptical that this will work well for the set of testable statements that one is automatically certain of.
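For concreteness, here is a minimal sketch of the estimator behind that puzzle (the Lincoln-Petersen mark-recapture estimate), plus the sampling-bias failure mode I’m worried about. The lake size, catch probabilities, and trial count are all invented for illustration:

```python
import random

def catch_round(pop_size, catch_prob):
    """One fishing round: fish i is caught independently with probability catch_prob[i]."""
    return {i for i in range(pop_size) if random.random() < catch_prob[i]}

def estimate_population(pop_size, catch_prob):
    """Lincoln-Petersen: catch/count/mark/release, then recapture/count."""
    marked = catch_round(pop_size, catch_prob)
    recaptured = catch_round(pop_size, catch_prob)
    overlap = len(marked & recaptured)
    if overlap == 0:
        return float("inf")  # no marked fish recaptured; the estimate is undefined
    # N_hat = (first catch size) * (second catch size) / (marked fish in second catch)
    return len(marked) * len(recaptured) / overlap

def mean_estimate(pop_size, catch_prob, trials=200):
    return sum(estimate_population(pop_size, catch_prob) for _ in range(trials)) / trials

random.seed(0)
N = 10_000
uniform = [0.2] * N  # every fish equally catchable: unbiased sampling
# Half the fish are far more catchable ("trap-happy") than the rest: biased sampling.
biased = [0.36] * (N // 2) + [0.04] * (N - N // 2)

print(mean_estimate(N, uniform))  # close to the true 10,000
print(mean_estimate(N, biased))   # roughly 6,000: systematically too low
```

With heterogeneous catchability, the same easy-to-catch individuals dominate both rounds, so the overlap is inflated and the estimate is biased low. The analogous worry for beliefs is that the statements easiest to “catch” and test are probably not representative of everything one is automatically certain of.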