Given that Clark constructed his own hardware he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of information by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
That wouldn’t help; he can’t then choose to send “key_up (a)” followed by “key_up (a)”; there has to be a “key_down (a)” in between.
He could, of course, simply elect to have his personal keyboard ignore key_ups and send only the shorter key_down codes, meaning that each character costs only 11 bits. Aside from that minor quibble, though, you make several excellent points.
If he’s writing his own keyboard driver, he can take this even further, and have his keyboard (when in speed mode) deliver a different set of scancodes; he can pick out 32 keys and have each of them deliver a different 5-bit code (hitting any key outside of those 32 automatically turns off speed mode). In this manner, his encoding efficiency is limited only by processing power (his system will have to decode the input stream pretty quickly) and clock rate (assuming he doesn’t mess with the desktop hardware, he’d probably still have to stick to 16.7 kHz). Since modern processors run in the GHz range, I expect that the keyboard clock rate will be the limiting factor.
Unless he starts messing with his desktop’s hardware, of course.
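To put rough numbers on that (a back-of-the-envelope sketch, assuming the 16.7 kHz clock mentioned above, the 33-bit press-plus-release cost of ordinary typing, and treating each speed-mode keypress as a bare 5-bit code with no framing overhead):

    # Back-of-the-envelope throughput comparison at a 16.7 kHz keyboard clock.
    CLOCK_HZ = 16_700  # bits per second on the clock line

    # Ordinary PS/2-style typing: 11-bit key_down frame + 22-bit key_up sequence.
    ps2_bits_per_char = 11 + 22
    ps2_chars_per_sec = CLOCK_HZ / ps2_bits_per_char

    # Speed mode: 32 chosen keys, each sending a distinct bare 5-bit code
    # (log2(32) = 5) and no key_up traffic at all.
    speed_bits_per_char = 5
    speed_chars_per_sec = CLOCK_HZ / speed_bits_per_char

    print(f"ordinary typing: {ps2_chars_per_sec:6.0f} characters/second")   # ~506
    print(f"speed mode:      {speed_chars_per_sec:6.0f} characters/second")  # ~3340

Either way the wire, not the GHz-range processor, is what runs out first.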
Given that Clark constructed his own hardware he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of information by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
That wouldn’t help; he can’t then choose to send “key_up (a)” followed by “key_up (a)”; there has to be a “key_down (a)” in between.
You seem to have read the text incorrectly. The passage you quote explicitly mentions sending both key_down and key_up and even uses the word ‘alternating’. I.e., all that is changing is a relatively minor mechanical detail of what kind of button each key behaves as. If necessary, imagine that each key behaves something like the button on a retractable ballpoint pen: first press, down; second press, up. All that is done is removing the need to actually hold each key down with a finger while it is in the down state.
I notice that I am confused. You say that I have read the original text incorrectly, and then you post a clarification that exactly matches my original interpretation of the text.
I see two possible causes for this. Either I have misunderstood you (as you state) and, moreover, continue to misunderstand you in the same way; or you have misunderstood me.
Therefore, I shall re-state my point in more detail, in the hope of clearing this up.
Consider the ‘a’ key. This point applies to all keys equally, of course, but for simplicity let us consider a single arbitrary key.
Under your proposed keyboard, the following is true.
The first time Clark presses ‘a’, the keyboard sends key_down (a). This is 11 bits, encoding the message ‘key “a” has been pressed’.
The second time Clark presses ‘a’, the keyboard sends key_up (a). This is 22 bits, encoding the message ‘key “a” has been pressed’.
The third time Clark presses ‘a’, the keyboard sends key_down (a). This is 11 bits, encoding the message ‘key “a” has been pressed’.
The fourth time Clark presses ‘a’, the keyboard sends key_up (a). This is 22 bits, encoding the message ‘key “a” has been pressed’.
I therefore note that replacing every key_up with a key_down saves a further 11 bits per 2 keystrokes, on average, for no loss of information.
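For concreteness, a minimal sketch of that bit-accounting, using the 11-bit key_down and 22-bit key_up costs assumed throughout this thread:

    # Bit cost of two consecutive presses of the same key.
    KEY_DOWN_BITS = 11
    KEY_UP_BITS = 22

    # Alternating scheme: first press sends key_down, second press sends key_up.
    alternating_bits = KEY_DOWN_BITS + KEY_UP_BITS   # 33 bits per 2 keystrokes

    # key_down-only scheme: every press sends a key_down.
    down_only_bits = 2 * KEY_DOWN_BITS               # 22 bits per 2 keystrokes

    print(alternating_bits, down_only_bits, alternating_bits - down_only_bits)  # 33 22 11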
I see two possible causes for this. Either I have misunderstood you (as you state) and, moreover, continue to misunderstand you in the same way; or you have misunderstood me.
Both at once is also a possibility (and from my re-analysis seems to be the most likely).
Allow me to abandon inferences about interpretations and just respond to some words.
Given that Clark constructed his own hardware he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of information by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
That wouldn’t help;
This claim is false. It would help a lot! It improves bandwidth by a factor of a little under two over the alternative of not making optimal use of the key_up signal as well as the key_downs. As for how much improvement the keyboard change is over merely using all 10 fingers optimally… the math gets complicated and is dependent on things like finger length.
I therefore note that replacing every key_up with a key_down saves a further 11 bits per 2 keystrokes, on average, for no loss of information.
I agree. If just abandoning key_up scancodes altogether is permitted then obviously do so! I used them because, from what little I understand of the PS/2 keyboard protocol (from reading CCC’s introduction and then doing a little additional research), the key_ups are not optional, and I decided that leaving them out would violate CCC’s assumptions. I was incidentally rather shocked at the way the protocol worked. 22 bits for a key_up? Why? That’s a terrible way to do it! (Charity suggests to me that bandwidth must not have been an efficient target of optimisation resources at design time.)
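For anyone wondering where the 11 and 22 come from, my reading of the protocol is that each PS/2 byte travels as an 11-bit frame (one start bit, eight data bits, odd parity, one stop bit), and that in scan code set 2 a release is a two-byte sequence: the 0xF0 ‘break’ prefix followed by the key’s make code. A quick sketch of that arithmetic:

    # Why a key_down costs 11 bits and a key_up costs 22 under PS/2
    # (scan code set 2), as I understand the protocol.
    FRAME_BITS = 1 + 8 + 1 + 1       # start + 8 data + odd parity + stop = 11

    # A key press ("make") is a single scancode byte, e.g. 0x1C for 'a'.
    key_down_bits = 1 * FRAME_BITS   # 11 bits

    # A key release ("break") is the 0xF0 prefix followed by the same make
    # code, i.e. two frames on the wire.
    key_up_bits = 2 * FRAME_BITS     # 22 bits

    print(key_down_bits, key_up_bits)  # 11 22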
Given that Clark constructed his own hardware he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of information by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
That wouldn’t help;
This claim is false.
Yes, you are right. On looking over this again, I see that I misread you there; for some reason (even after I knew that misreading was likely) I read that as 2*log2(number of keys) bits of information per keypress instead of per 33 bits of information.
I agree. If just abandoning key_up scancodes altogether is permitted then obviously do so! I used them because, from what little I understand of the PS/2 keyboard protocol (from reading CCC’s introduction and then doing a little additional research), the key_ups are not optional, and I decided that leaving them out would violate CCC’s assumptions.
Ah, right. My apologies; I’d thought that the idea of drawing log2(number of keys) bits of information per keypress already implied a rejection of the assumption that the PS/2 protocol would be used.
I was incidentally rather shocked at the way the protocol worked. 22 bits for a key_up? Why? That’s a terrible way to do it! (Charity suggests to me that bandwidth must not have been an efficient target of optimisation resources at design time.)
Well, to be fair, the PS/2 protocol is only intended to be able to keep up with normal human typing speed. The bandwidth limit sits at over 80 characters per second, and I don’t think that anyone outside the realm of fiction is likely to ever type at that speed, even just mashing keys randomly.
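To put a rough number on that ceiling (assuming the fastest clock rate discussed above and the full press-plus-release cost per character; the exact figure depends on the clock speed actually negotiated):

    # Rough upper bound on characters per second through a PS/2-style link.
    CLOCK_HZ = 16_700            # assumed fastest keyboard clock
    BITS_PER_CHAR = 11 + 22      # key_down frame + key_up sequence

    print(f"{CLOCK_HZ / BITS_PER_CHAR:.0f} characters/second")  # ~506

So “over 80 characters per second” holds with plenty of room to spare.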
Characters per second and words per minute don’t match; wpm is typically calculated with 5 characters per word, so 80 cps would correspond to 960 wpm.