Because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded.
There is also a very large amount of stuff that one is automatically certain of that is correct, though trivial: data like “liquid water is wet”. I’m not sure how one would even practically quantify what fraction of the statements one is certain of are true. Even if one could efficiently test them, how would one enumerate them in the first place? In the current state of science, tracing a full human neural network (and then converting its beliefs into a list of testable statements) is beyond our capabilities.
I’m curious about this “liquid water is wet” statement. Obviously I agree, but for the sake of argument, could you taboo “is” and tell me the statement again? I’m trying to understand how your algorithm feels from the inside.
If you’re curious how to quantify fractions of statements, you might enjoy this puzzle I heard once. Suppose you’re an ecological researcher and you need to know the number of fish in a large lake. How would you get a handle on that number?
One part of “liquid water is wet” is that a droplet of it will spread out on many common surfaces: salt, paper, cotton, etc. Yes, it is a bit tricky to unpack what is meant by “wet” (perhaps some other properties, like not withstanding shear, are also folded in), but I don’t think it is just a tautology, with “wet” defined as the set of properties that liquid water has.
Re the catch/count/mark/release/recapture/count puzzle (the standard mark-and-recapture method): how feasible that is depends on how well one can do (reasonably) unbiased sampling, since the estimate assumes every fish is equally likely to be caught in both rounds. I’m skeptical that that will work well with the set of testable statements that one is automatically certain of; the statements one happens to notice and check are hardly a random draw from all of one’s certainties.
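To make that worry concrete, here’s a quick simulation of the Lincoln–Petersen estimator, which is one standard formalization of the mark-and-recapture trick. All the numbers are hypothetical, chosen only for illustration; the point is just that when some “fish” are much more catchable than others, the same individuals get caught in both rounds, recaptures run high, and the population estimate comes out badly low:

```python
import numpy as np

# Toy simulation of the mark-and-recapture (Lincoln-Petersen) estimate,
# comparing uniform sampling against sampling biased toward easily
# "catchable" fish. All parameters are made up for illustration.

rng = np.random.default_rng(0)
TRUE_N = 10_000      # actual number of fish in the lake
K = 500              # fish caught (without replacement) per round

def estimate(weights):
    """Lincoln-Petersen: N_hat = marked * second_catch / recaptured."""
    p = weights / weights.sum()
    marked = rng.choice(TRUE_N, size=K, replace=False, p=p)
    second = rng.choice(TRUE_N, size=K, replace=False, p=p)
    recaptured = np.intersect1d(marked, second).size
    return K * K / max(recaptured, 1)

# Uniform catchability: every fish equally likely to be caught.
uniform = np.ones(TRUE_N)
# Biased catchability: 10% of fish are 20x easier to catch in both rounds.
biased = np.where(np.arange(TRUE_N) < TRUE_N // 10, 20.0, 1.0)

print("true N:", TRUE_N)
print("estimate, uniform sampling: %.0f" % estimate(uniform))
print("estimate, biased sampling:  %.0f" % estimate(biased))  # biased low
```

With uniform catchability the estimate lands near the true count; with the biased weights it falls far short, which is the failure mode I’d expect when sampling one’s own automatic certainties.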