That’s not necessarily a bad idea, but how do you A/B test different versions of a PDF? What is the response that is being measured?
I’d test one chapter (or one sub-chapter) at a time. Reader-reported assessments of quality and ease of understanding would be the obvious thing to measure for any instructional book… but also, isn’t CFAR trying to come up with various “rationality tests”? Accuracy of responses to test questions would be a great metric to try to optimize, and most instructional material benefits from having end-of-chapter questions anyway, if only to help readers verify that they’ve gained actual understanding and not just an illusion thereof.
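If both variants share the same end-of-chapter quiz, per-variant accuracy gives a concrete response to compare. A minimal sketch of that comparison in plain javascript, where the function name and all the counts are made up for illustration:

```javascript
// Sketch: compare end-of-chapter quiz accuracy between two chapter variants
// with a two-proportion z-test. The counts are invented; in practice they'd
// come from whatever response-collection mechanism the book ends up using.
function twoProportionZ(correctA, totalA, correctB, totalB) {
  const pA = correctA / totalA;
  const pB = correctB / totalB;
  const pPool = (correctA + correctB) / (totalA + totalB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / totalA + 1 / totalB));
  return (pA - pB) / se; // z-score; |z| > 1.96 ~ significant at p < 0.05
}

// e.g. variant A: 120/200 quiz answers correct, variant B: 141/200 correct
console.log(twoProportionZ(120, 200, 141, 200)); // ≈ -2.21, so B looks better
```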
You can embed arbitrary javascript in PDFs, so what about including “phone-home” text-boxes for marginalia in rather the same way that online editions of Real World Haskell and other programming books have comment threads for each paragraph? I’d think other metrics like time-on-page could be measured as well. This would need to be disclosed, of course, and not every reader can be expected to comment, but the “how” seems tractable at first impression.
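For concreteness, here is a rough sketch of what such a marginalia box could look like in Acrobat javascript, say as the MouseUp action of a “send” button next to a margin text field. The field name and collection endpoint are hypothetical; submitForm is a real Acrobat API, though viewers differ in whether and how they honor it:

```javascript
// Sketch (Acrobat JavaScript): a per-paragraph comment box that posts its
// contents to a hypothetical collection endpoint. "comment_p17" would be a
// text field placed in the margin of page 17.
var f = this.getField("comment_p17");
if (f && f.value) {
  // submitForm is standard Acrobat JS; Reader may still prompt, or require
  // a rights-enabled document, and other viewers may ignore it entirely.
  this.submitForm({
    cURL: "https://example.org/marginalia?page=17", // hypothetical endpoint
    cSubmitAs: "HTML",          // send field values as an HTML form post
    aFields: ["comment_p17"]    // submit only this one field
  });
}
```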
I don’t have any useful insights on what response to measure.
This doesn’t seem usefully true. Some googling for queries like ‘a/b testing PDFs’ or ‘PDF phone home’ turns up no one discussing A/B testing different versions of PDFs, and Wikipedia indicates that only Adobe supports JS, and even it produces a popup when you try to phone home. So any A/B test is going to work on only a fraction of users (how many LWers still use Adobe Acrobat to read PDFs?), and it will alarm the ones it does work on (‘is this PDF spyware‽’).
Foxit Reader supports javascript, and libpoppler (which powers evince and okular, among others) does as well.
Without something to measure, though, that’s really just a technical curiosity.
I don’t know how the security model works in various PDF readers, but wouldn’t the javascript code be sandboxed, hopefully? Sane security practices shouldn’t allow arbitrary code in PDFs to talk to random ’net addresses...
If the PDF is signed by a certificate the user has manually installed, it can embed what Adobe calls “high privilege” javascript, which includes the ability to launch any URL. That’s an extra step, which would discourage some users, but on the plus side it addresses the “who’s given informed consent?” problem.
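A minimal sketch of such a beacon, assuming the document is certified and the reader has trusted the certificate; app.launchURL is a real Acrobat API, but the endpoint and variant tag here are made up:

```javascript
// Sketch: a phone-home beacon that only works in a privileged context
// (e.g. the document is certified with a certificate the user installed).
// Outside a privileged context Acrobat blocks or prompts on launchURL.
try {
  app.launchURL("https://example.org/beacon?variant=B", /*bNewFrame=*/ false);
} catch (e) {
  // unprivileged context: fail quietly rather than alarm the reader
}
```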
Momentarily donning a slightly darker hat: it is also possible for a PDF to launch an arbitrary executable (see pp. 30–34 of Julia Wolf’s “OMG WTF PDF” slides; there’s a video as well). AIUI this requires no additional privileges.
My estimate for the value of that “some” is 95%+
Not to mention that most of the people who can be easily persuaded to manually install a cert on their PC probably already have a dozen toolbars in their browser… :-D