Assuming that the limit is in the PS/2 protocol (and not in the keyboard hardware—Clark may have quietly replaced the keyboard on his desktop with a high-speed variant that he’d built himself, but it still needs to talk to the computer using a known protocol);
My recently primed munchkin instinct can’t help but notice that the analysis given doesn’t remotely approach the limits specified here. Specifically, it tacitly assumes that Clark uses only the stock standard software that everyone else uses. In fact, it even assumes that Clark doesn’t use even the most rudimentary macro or autocomplete features built into standard word processors!
Assuming that at some point in his life Clark spent several minutes coding (at the limits you calculate) in anticipation of at some point in the future wishing to type fast, all subsequent text input via the PS/2 protocol could occur a couple of orders of magnitude faster. Optimisations would include (a rough numerical sketch follows the list):
Abandon the preconception that pressing the key with the “A” painted on it puts the letter ‘a’ in the text—or that any of the other keys do, for that matter, especially the ones that aren’t so common! Every key press carries log2(number of keys) bits of information. Use all of it.
A key_press uses 33 bits of bandwidth in total, but key_press isn’t a discrete operation: 11 bits are used for key_down and 22 for key_up, and these don’t need to follow each other directly (see the conventional usage of shift, control and alt). As far as the PS/2 protocol is concerned, key_up supplies another log2(number of keys) bits of information (for the cost of 22 bits of bandwidth).
Given that Clark constructed his own hardware, he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of bandwidth by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
If Clark is using a standard keyboard then he can still send more information via key_up, but is now limited by fingers. Since he has only 10 fingers, before every key_down (after the first 10) he must first send one or more key_ups. Which finger(s) he chooses to lift is influenced by the proximity of the keys to each other. Optimal use of this additional information would use a custom weighted “twister” protocol that extracts every bit of information available in the choice of “left index finger T” instead of “right pointer T” when both are bio-mechanically plausible options. For this reason, if Clark is using a standard keyboard I recommend he use the smallest layout possible. A laptop’s keys being cramped is a feature!
Human languages (like English) are grossly inefficient in terms of symbol use. Shannon (of Shannon entropy fame) measured the entropy of English text at between 1 and 1.5 bits per letter, even when using mere human subjects guessing what the next letter would be. Some letters are used way too much, simple combinations of letters like “atbyl” have no meaning, some word combinations are more likely than others, andIcanreadthiswithoutdifficulty. If bandwidth rather than processing power is the limit, compression is called for. I estimate that Clark’s Text Over PS/2 Protocol ought to be at least as efficient as Shannon’s “subjects can guess what is coming next” findings for typical text while remaining lossless (albeit less efficient) even under unusual input.
Since Clark wants to maintain a secret identity, his keyboard must operate normally except when he is typing fast. This is easy enough to accomplish via any one of:
An unmarked button that requires superhuman strength to press.
A keyboard combination (F12 D u _ @ F3 W * & etc) that will not occur randomly but still takes negligible time to enter.
The software just starts interpreting the input differently once a sufficient number of keys have been input in rapid succession. (This seems preferable.)
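To put rough numbers on the optimisations above, here is a back-of-the-envelope sketch in Python. The 16.7 kHz clock, 11-bit frames, 101-key layout and 1.3 bits per character of English are assumptions drawn from figures quoted in this thread, not measurements:

```python
import math

# Assumed figures from the thread above (not measurements):
CLOCK_HZ = 16_700            # upper end of the PS/2 clock range
FRAME_BITS = 11              # start + 8 data bits + parity + stop
NUM_KEYS = 101               # a standard 101-key layout
ENGLISH_BITS_PER_CHAR = 1.3  # mid-range of Shannon's 1-1.5 bits/letter

bits_per_keypress = math.log2(NUM_KEYS)  # ~6.66 bits of key choice per press

# Naive typing: one character costs key_down (11 bits) + key_up (22 bits).
naive_cps = CLOCK_HZ / (3 * FRAME_BITS)

# Alternating scheme: key_down on the first press, key_up on the second,
# so two full key choices travel per 33 bits of bandwidth.
alternating_info_bps = CLOCK_HZ * 2 * bits_per_keypress / (3 * FRAME_BITS)

# Entropy coding tuned to English, layered on top of the raw key choices:
compressed_cps = alternating_info_bps / ENGLISH_BITS_PER_CHAR

print(f"naive typing:       {naive_cps:7.0f} chars/s")
print(f"alternating scheme: {alternating_info_bps:7.0f} bits/s of key choice")
print(f"plus compression:   {compressed_cps:7.0f} chars/s of English")
```

Under these assumptions the compressed scheme lands around 5,000 characters per second, against roughly 500 for naive press-and-release typing.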
Given that Clark constructed his own hardware, he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of bandwidth by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
That wouldn’t help; he can’t then choose to send “key_up (a)” followed by “key_up (a)”; there has to be a “key_down (a)” in between.
He could, of course, simply elect to have his personal keyboard ignore key_ups and send only the shorter key_down codes, meaning that each character costs only 11 bits. Aside from that minor quibble, though, you make several excellent points.
If he’s writing his own keyboard driver, he can take this even further, and have his keyboard (when in speed mode) deliver a different set of scancodes; he can pick out 32 keys and have each of them deliver a different 5-bit code (hitting any key outside of those 32 automatically turns off speed mode). In this manner, his encoding efficiency is limited only by processing power (his system will have to decode the input stream pretty quickly) and clock rate (assuming he doesn’t mess with the desktop hardware, he’d probably still have to stick to 16.7 kHz). Since modern processors run in the GHz range, I expect that the keyboard clock rate will be the limiting factor.
Unless he starts messing with his desktop’s hardware, of course.
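A minimal sketch of how such a speed mode might decode, assuming a hypothetical 32-key set and code table (only the 32-key/5-bit idea and the drop-out-of-speed-mode rule come from the comment above):

```python
# Sketch of the speed mode described above: 32 chosen keys each deliver
# a distinct 5-bit code, and any key outside the set drops back to
# normal mode. The key names and code assignments are hypothetical.

SPEED_KEYS = [
    "a", "s", "d", "f", "g", "h", "j", "k", "l", ";",
    "q", "w", "e", "r", "t", "y", "u", "i", "o", "p",
    "z", "x", "c", "v", "b", "n", "m", ",", ".", "/",
    "space", "enter",
]  # exactly 32 keys -> 5 bits per press

KEY_TO_CODE = {key: i for i, key in enumerate(SPEED_KEYS)}

def decode_speed_stream(keys):
    """Turn a stream of key names into a bit string, 5 bits per key.

    Decoding stops at the first key outside the 32-key set, which is
    the signal to switch speed mode off; those keys are returned for
    normal handling.
    """
    bits = []
    for pos, key in enumerate(keys):
        code = KEY_TO_CODE.get(key)
        if code is None:  # out-of-set key: leave speed mode
            return "".join(bits), keys[pos:]
        bits.append(format(code, "05b"))
    return "".join(bits), []

bits, rest = decode_speed_stream(["a", "s", "enter", "F1", "x"])
print(bits)  # 000000000111111  (5 bits each for a, s, enter)
print(rest)  # ['F1', 'x'] goes back to normal key handling
```

Taking the comment’s assumption that a custom driver can spend all 16,700 bits per second on payload, that is 16,700 / 5 ≈ 3,340 five-bit symbols per second on the wire, which is why the clock rather than the CPU ends up as the bottleneck.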
Given that Clark constructed his own hardware, he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of bandwidth by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
That wouldn’t help; he can’t then choose to send “key_up (a)” followed by “key_up (a)”; there has to be a “key_down (a)” in between.
You seem to have read the text incorrectly. The passage you quote explicitly mentions sending both key_down and key_up, and even uses the word ‘alternating’. I.e., all that is changing is a relatively minor mechanical detail of what kind of button each key behaves as. If necessary, imagine that each key behaves something like the button on a retractable ballpoint pen. First press: down. Second press: up. All that is done is removing the need to actually hold each key down with a finger while it is in the down state.
I notice that I am confused. You say that I have read the original text incorrectly, and then you post a clarification that exactly matches my original interpretation of the text.
I see two possible causes for this. Either I have misunderstood you (as you state) and, moreover, continue to misunderstand you in the same way; or you have misunderstood me.
Therefore, I shall re-state my point in more detail, in the hope of clearing this up.
Consider the ‘a’ key. This point applies to all keys equally, of course, but for simplicity let us consider a single arbitrary key.
Under your proposed keyboard, the following is true.
The first time Clark presses ‘a’, the keyboard sends key_down (a). This is 11 bits, encoding the message ‘key “a” has been pressed’.
The second time Clark presses ‘a’, the keyboard sends key_up (a). This is 22 bits, encoding the message ‘key “a” has been pressed’.
The third time Clark presses ‘a’, the keyboard sends key_down (a). This is 11 bits, encoding the message ‘key “a” has been pressed’.
The fourth time Clark presses ‘a’, the keyboard sends key_up (a). This is 22 bits, encoding the message ‘key “a” has been pressed’.
I therefore note that replacing every key_up with a key_down saves a further 11 bits per 2 keystrokes, on average, for no loss of information.
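For concreteness, a small sketch comparing the average wire cost of the schemes under discussion, using the 11-bit key_down and 22-bit key_up costs assumed throughout this thread:

```python
# Average wire cost per keystroke under the three schemes in this thread,
# using the costs assumed above: key_down = 11 bits, key_up = 22 bits.

KEY_DOWN_BITS = 11
KEY_UP_BITS = 22

def avg_bits_per_keystroke(scheme, n=1000):
    if scheme == "naive":        # every character is a press plus a release
        return KEY_DOWN_BITS + KEY_UP_BITS
    if scheme == "alternating":  # odd presses send key_down, even presses key_up
        total = sum(KEY_DOWN_BITS if i % 2 == 0 else KEY_UP_BITS
                    for i in range(n))
        return total / n
    if scheme == "down_only":    # key_ups suppressed entirely
        return KEY_DOWN_BITS
    raise ValueError(scheme)

for scheme in ("naive", "alternating", "down_only"):
    print(f"{scheme:12s}{avg_bits_per_keystroke(scheme):5.1f} bits/keystroke")
# naive: 33.0, alternating: 16.5, down_only: 11.0 -- dropping key_ups
# saves 5.5 bits per keystroke, i.e. 11 bits per 2 keystrokes, as noted.
```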
I see two possible causes for this. Either I have misunderstood you (as you state) and, moreover, continue to misunderstand you in the same way; or you have misunderstood me.
Both is also a possibility (and, from my re-analysis, seems to be the most likely).
Allow me to abandon inferences about interpretations and just respond to some words.
Given that Clark constructed his own hardware, he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of bandwidth by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
That wouldn’t help;
This claim is false. It would help a lot! It improves bandwidth by a factor of a little under two over the alternative, by making optimal use of the key_up signal as well as the key_downs. As for how much improvement the keyboard change is over merely using all 10 fingers optimally… the math gets complicated and is dependent on things like finger length.
I therefore note that replacing every key_up with a key_down saves a further 11 bits per 2 keystrokes, on average, for no loss of information.
I agree. If just abandoning key_up scancodes altogether is permitted then obviously do so! I used them because, from what little I understand of the PS/2 keyboard protocol (from reading CCC’s introduction and then a little additional research), the key_ups are not optional, and I decided that leaving them out would violate CCC’s assumptions. I was incidentally rather shocked at the way the protocol worked. 22 bits for a key_up? Why? That’s a terrible way to do it! (Charity suggests to me that bandwidth must not have been an efficient target of optimisation resources at design time.)
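For anyone puzzling over where those 11 and 22 bits come from, here is a minimal sketch of the framing (the frame layout is standard PS/2; 0x1C is the scan code set 2 make code for ‘A’):

```python
def ps2_frame(byte):
    """One PS/2 frame: start (0) + 8 data bits LSB-first + odd parity + stop (1)."""
    data = [(byte >> i) & 1 for i in range(8)]
    parity = 1 - sum(data) % 2          # odd parity over the eight data bits
    return [0] + data + [parity] + [1]  # 11 bits total

A_MAKE = 0x1C  # make code for the 'A' key in scan code set 2

key_down = ps2_frame(A_MAKE)                  # one frame: 11 bits
key_up = ps2_frame(0xF0) + ps2_frame(A_MAKE)  # break prefix (0xF0) + code: 22 bits

print(len(key_down), len(key_up))  # 11 22
```

The doubled cost of a key_up is simply the 0xF0 break prefix: a whole extra frame spent flagging “this is a release” rather than packing that bit into the scancode itself.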
Given that Clark constructed his own hardware, he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of bandwidth by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
That wouldn’t help;
This claim is false.
Yes, you are right. On re-reading and looking over this again, I see that I misread you there; for some reason (even after I knew that misreading was likely) I read that as 2*log2(number of keys) bits of information per keypress instead of per 33 bits of bandwidth.
I agree. If just abandoning key_up scancodes altogether is permitted then obviously do so! I used them because, from what little I understand of the PS/2 keyboard protocol (from reading CCC’s introduction and then a little additional research), the key_ups are not optional, and I decided that leaving them out would violate CCC’s assumptions.
Ah, right. My apologies; I’d thought that the idea of drawing log2(number of keys) bits of information per keypress already implied a rejection of the assumption that the PS/2 protocol would be used.
I was incidentally rather shocked at the way the protocol worked. 22 bits for a key_up? Why? That’s a terrible way to do it! (Charity suggests to me that bandwidth must not have been an efficient target of optimisation resources at design time.)
Well, to be fair, the PS/2 protocol is only intended to be able to keep up with normal human typing speed. The bandwidth limit sits at over 80 characters per second, and I don’t think that anyone outside the realm of fiction is likely to ever type at that speed, even just mashing keys randomly.
Characters per second and words per minute don’t match; wpm is typically calculated with 5 characters per word, so 80 cps would correspond to 960 wpm.