This debate brings to mind one of the more interesting differences between the hard sciences and other fields: you firmly believe A, someone makes a compelling argument, and within a few seconds you firmly believe not-A, to the point of arguing for not-A with even more vigor than you used for A just a few seconds ago.
(Most recent example from my own life that springs to mind: “It seems incredibly improbable that any Turing machine of size 100 could encode a complete solution to the halting problem for all Turing machines of size up to almost 100… oh. Nevermind.”)
So what’s the program? Is it the one that runs every Turing machine up to length 100 for BusyBeaver(100) steps, and gets the number BusyBeaver(100) by running the BusyBeaver_100 program whose source code is hardcoded into it? That would be of length 100+c for some constant c, but maybe you didn’t think the constant was worth mentioning.
Well, it’s still encoded. But I actually meant to say “almost 100” in the original. And yes, that’s the answer.
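The construction being discussed can be sketched concretely. The idea: if you hardcode the Busy Beaver step bound S(n), a bounded halting oracle is just "simulate for S(n) steps; anything still running never halts." A minimal toy version in Python, using the known small values of S(n) for 2-symbol machines in place of the (uncomputable, astronomically large) BusyBeaver(100); the machine encoding and helper names here are illustrative, not from the thread:

```python
# Hardcoded Busy Beaver "shift" bounds S(n) for n-state, 2-symbol Turing
# machines (S(1)=1, S(2)=6, S(3)=21, S(4)=107 are the known values).
# This plays the role of the hardcoded BusyBeaver(100) in the argument.
S = {1: 1, 2: 6, 3: 21, 4: 107}

def run(tm, max_steps):
    """Simulate a 2-symbol TM given as {(state, symbol): (write, move, next)}.
    Returns the step count if it halts within max_steps, else None."""
    tape, pos, state = {}, 0, 'A'
    for step in range(1, max_steps + 1):
        write, move, state = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == 'H':          # designated halting state
            return step
    return None

def halts(tm, n):
    """Bounded halting oracle: correct for any n-state, 2-symbol machine,
    because no such machine halts after more than S(n) steps."""
    return run(tm, S[n]) is not None

# The 2-state Busy Beaver champion: halts after exactly S(2) = 6 steps.
bb2 = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
       ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}

# A 2-state machine that shuttles back and forth forever.
loop2 = {('A', 0): (0, +1, 'B'), ('A', 1): (0, +1, 'B'),
         ('B', 0): (0, -1, 'A'), ('B', 1): (0, -1, 'A')}

print(halts(bb2, 2), halts(loop2, 2))  # → True False
```

The overhead of the simulator itself is the constant c in "length 100+c": it doesn't grow with n, which is why "size n solves halting up to almost n" starts working once n dwarfs c.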
Pretty sure that also happens in fields other than the hard sciences. For example, it is said that converts to a religion are usually much more fervent than people who grew up with it (though there’s an obvious selection bias).
(The advanced, dark-artsy version of this is claiming with a straight face to never have believed A in the first place, and hope the listener trusts what you’re saying now more than their memory of what you said earlier, and if it doesn’t work, claim they had misunderstood you. My maternal grandpa always tries to use that on my father, and almost always fails, but if he does that I guess it’s because it does work on other people.)
The operative glory is doing it in five seconds.
And, being right.
That’s harder to distinguish from the outside.
That does (did?) seem improbable to me. I’d have expected n to need to be far larger than 100 before the overhead became negligible enough for ‘almost n’ to fit (i.e. size 10,000 gives almost 10,000 would have seemed a lot more likely than size 100 gives almost 100). Do I need to update in the direction of optimal Turing machine code requiring very few bits?
In general, probably yes. Have you checked out the known parts of the Busy Beaver sequence? Be sure to guess what you expect to see before you look.
In specific, I don’t know the size of the constant c.
I mentally replaced “100” with “N” anyway (and interpreted “almost N” in the obvious-in-the-context way).
You mentally threw away relevant information; i.e., you merely made yourself incapable of thinking about what is claimed about the size of c relative to 100. That’s fine, but it ought to indicate to you that you have little useful information to add in response to a comment that amounts to an expression of curious surprise that c << 100.
Where the context suggests it can be interpreted as an example of the Eliezer’s-edits bug?
I hadn’t read the before-the-edit version of the comment.
I’ve also found this to be medium evidence that I’m not as informed about the subject as I thought I was, so I back off my confidence somewhat. If I recently made an error that would have resulted in something very bad happening, I should be very careful about thinking that my next design is safe.