Einstein didn’t come up with General Relativity that way. He didn’t even do the hard math himself. He started from a few little truths (e.g. equivalence, the constancy of the speed of light, covariance, and the requirement that the theory reduce to Newtonian gravity in unexceptional cases), drawn from a handful of results that didn’t seem to fit classical theory, and then he found a set of equations that fit.
Newtonian gravity provided heaps of data points and a handful of non-fits. Einstein bootstrapped on prior achievements like Newtonian gravity and special relativity and tweaked them to fit a handful of additional data points better. His confidence came from fitting 100% of the available data set (something that wasn’t clear in the case of the cosmological constant), however small that set may have been. The minimum-bit hypothesis assumes that all bits are created equal. But they aren’t: some bits advance the cause not at all, while others advance it a great deal.
Similarly, the 27-bit rule for 100,000,000 people assumes that each question splits the population into equal numbers of people who answer yes and no. In fact, some bits are more discriminating than others. “Have you ever been elected to an office that requires a statewide vote, or been a Vice President?” (perhaps two bits of information) will eliminate 99.9999%+ of potential candidates for President, working nearly perfectly to dramatically narrow the field from the 100,000,000 eligible candidates. “Do you want to run for President?” cuts another 90%+ of potential candidates.
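To make the unequal-split point concrete, here is a small sketch using the standard log₂ measure of information (the probabilities below are illustrative assumptions, not figures from the discussion): a 50/50 question yields exactly one bit either way, while a question almost everyone answers “no” yields a huge payoff on a “yes” but almost nothing on average.

```python
from math import log2

def answer_bits(p):
    """Bits carried by each answer to a yes/no question that
    gets a 'yes' from fraction p of the candidate pool."""
    return -log2(p), -log2(1 - p)   # (bits of a 'yes', bits of a 'no')

def expected_bits(p):
    """Average information per question (the binary entropy)."""
    yes, no = answer_bits(p)
    return p * yes + (1 - p) * no

# A 50/50 question: each answer is worth exactly one bit.
print(answer_bits(0.5))       # (1.0, 1.0)

# Assume ~1 in a million people have been elected statewide or been VP:
# a 'yes' carries ~20 bits, a 'no' almost none, and the question
# averages far less than one bit.
print(answer_bits(1e-6))      # (~19.93, ~0.0000014)
print(expected_bits(1e-6))    # ~0.00002
```

A skewed question is a lottery ticket: nearly worthless on average, but enormously discriminating on the rare “yes.”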
Einstein was confident because his bits had greater discriminatory power than other bits of information. There are only so many ways it is logically possible to fit the data he had.
It’s not an assumption, it’s a definition. Whatever is enough to cut your current set of candidates in half is “one bit”: the first bit will eliminate 50,000,000 people, the last bit will eliminate 1. An answer that reduces the set of candidates to 0.000001 times its original size contains 20 bits of information. (Notice that the question doesn’t have bits of information associated with it, since each possible answer reduces the candidate set by a different amount; if they had said “no,” you would have acquired only a millionth of a bit of information.)
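The definition above can be checked directly: measure an answer by how much it shrinks the candidate set. This is a minimal sketch of that arithmetic (the pool size and reduction fractions are the ones used in the discussion):

```python
from math import log2

def bits_of_answer(before, after):
    """Bits gained when an answer shrinks the candidate set
    from `before` people to `after` people."""
    return log2(before / after)

pool = 100_000_000

# Singling out one person from 100,000,000 takes ~26.6 halvings,
# hence the 27-bit rule:
print(log2(pool))                              # ~26.6

# An answer that keeps only 0.000001 of the pool is worth ~20 bits:
print(bits_of_answer(pool, pool * 0.000001))   # ~19.93

# A "no" that keeps 99.9999% of the pool carries almost nothing,
# about a millionth of a bit:
print(bits_of_answer(pool, pool * 0.999999))   # ~0.0000014
```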