General:
There are absolutely vital lies that everyone can and should believe, even knowing that they aren’t true or cannot be true.
/Everyone/ today has their own personal army, including the parts of the army no one really likes, such as the iffy command structure and the sociopath that we’re desperately trying to Section Eight.
Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.
Political:
Network Neutrality aims at a good thing, but the underlying rule structure necessary to implement it makes the task either fundamentally impossible or practically undesirable.
Privacy policies focused on preventing collection of identifiable data are ultimately doomed.
LessWrong-specific:
“Karma” is a terrible system for any site that lacks extreme monofocus. A point of Karma means the same thing whether it was earned by a top-level post that breaks into new levels of philosophy or by a sufficiently entertaining pun. It might be the least bad system available, but in a community nearly defined by tech and data analysis it’s disappointing.
The risks and costs of “Raising the sanity waterline” are heavily underinvestigated. We recognize that there is an individual valley of bad rationality, but haven’t really looked at what this would mean on a national scale. “Nuclear Winter” as argued by Sagan was a very, very overt Pascal’s Wager: this Very High Value event can be avoided, so we must avoid it at any cost. It /also/ certainly gave valuable political cover to anti-nuclear-war folk, may have affected or effected Russian and US and Cuban nuclear policy, and could (although not necessarily would) be supported from a utilitarian perspective… several hundred pages of reading later.
“Rationality” is an overloaded word in the exact sort of ways that make it a terrible thing to turn into an identity. When you’re competing with RationalWiki, the universe is trying to give you a Hint.
The type of Atheism that is certain it will win, won’t. There’s a fascinating post describing how religion was driven from its controlling aspects in History, in Science, in Government, in Cleanliness… and then goes on to describe how religion /will/ be driven from such a place on matters of ethics. Do not question why, no matter your surprise, religion remains on a pedestal for Ethics, no matter how much it’s poked and prodded by the blasphemy of actual practice. Lest you find the answer.
((I’m /also/ not convinced that Atheism is a good hill for improved rationality to spend its capital on, any more than veganism is a good hill for improved ethics to spend its capital on. This may be opinion rather than right/wrong.))
MIRI-specific:
MIRI dramatically weakens its arguments by focusing on special-case scenarios because those special-case situations are personally appealing to a few of its sponsors. Recursively self-improving Singularity-style AI is very dangerous… and it’s several orders of complexity more difficult to describe that danger, whereas even minimally self-improving AI still has the potential to be an existential risk, requires many fewer leaps to discuss, and leads to similar concerns anyway.
MIRI’s difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that’s a value of “difficulty working with outsiders” that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))
Isn’t this basically Goodhart’s law?
It’s related. Goodhart’s Law says that using a measure for policy will decouple it from any pre-existing relationship with economic activity, but doesn’t predict how that decoupling will occur. The common story of Goodhart’s law tells us how the Soviet Union measured factory output in pounds of machinery, and got heavier but less efficient machinery. Formalizing the patterns tells us more about how this would change if, say, there had not been very strict and severe punishments for falsifying machinery weight production reports.
Sometimes this is a good thing: it’s why, for one example, companies don’t instantly implode into profit-maximizers just because we look at stock values (or at least take years to do so). But it does mean that following a good statistic well tends to cause worse outcomes than following a poor statistic weakly.
That said, while I’m convinced that’s the pattern, it’s not the only one or even the most obvious one, and most people seem to have different formalizations, and I can’t find the evidence to demonstrate it.
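The decoupling pattern can be sketched as a toy simulation (everything here is illustrative: the “weight vs. efficiency” framing is borrowed from the Soviet example above, and all the numbers are made up). Agents are selected on a proxy measure that genuinely correlates with the true goal, and the harder the selection pressure on the proxy, the wider the gap between the proxy score you paid for and the goal you actually got:

```python
import random

random.seed(0)

# Toy model of proxy optimization. True goal: machine efficiency.
# Measured proxy: weight, which correlates with efficiency but
# carries independent noise.
candidates = []
for _ in range(100_000):
    efficiency = random.gauss(0, 1)
    weight = efficiency + random.gauss(0, 1)  # proxy = goal + noise
    candidates.append((weight, efficiency))

candidates.sort()  # ascending by proxy score

def mean(xs):
    return sum(xs) / len(xs)

# Mild selection on the proxy: keep the top half.
mild = candidates[50_000:]
# Extreme selection on the proxy: keep the top 0.1%.
extreme = candidates[-100:]

# Gap between the proxy score selected for and the goal obtained:
gap_mild = mean([w for w, _ in mild]) - mean([e for _, e in mild])
gap_extreme = mean([w for w, _ in extreme]) - mean([e for _, e in extreme])

# The harder the selection on the proxy, the wider the gap.
print(round(gap_mild, 2), round(gap_extreme, 2))
```

This only demonstrates the statistical half of the story (the optimizer’s curse); it doesn’t model agents actively gaming the measure, which is the stronger form of the pattern.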
Desirability issues aside, “believing X” and “knowing X is not true” cannot happen in the same head.
This is known as doublethink. Its connotations are mostly negative, but Scott Fitzgerald did say that “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function”—a bon mot I find insightful.
Example of that being useful?
(Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.)
Having an internalized locus of control strongly correlates with a wide variety of psychological and physiological health benefits. There’s some evidence that this link is causative for at least some characteristics. It’s not a completely unblemished good characteristic—it correlates with lower compliance with medical orders, and probably isn’t good for some anxiety disorders in extreme cases—but it seems more helpful than not.
It’s also almost certainly a lie. Indeed, it’s obvious that such a thing can’t exist under any useful model of reality. There are mountains of evidence on both the nature and nurture sides of the debate, to the point where we really hope that bad choices are caused by as external an event as possible, because /that/, at least, we might be able to fix. At a more basic level, there’s a whole lot more universe that isn’t you than there is you to start with. On the upside, if your locus of control is external, at least it’s not worth worrying about. You couldn’t do much to change it, after all.
Psychology has a few other traits where this sort of thing pops up, most hilariously during placebo studies, though that’s perhaps too easy an example. It’s not the only one, though: useful lies are core to a lot of current solutions to social problems, all the way down to using normal decision theory to cooperate in an iterated prisoner’s dilemma.
It’s possible (even plausible) that this represents a valley of rationality—like the earlier example of Pascal’s Wagers that hold decent Utilitarian tradeoffs underneath—but I’m not sure it’s falsifiable, and it’s certainly not obvious right now.
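The iterated-prisoner’s-dilemma point can be made concrete in a few lines. This is just the standard textbook setup (payoffs and strategy names are the usual ones, not anything specific to this thread): defection dominates any single round, yet a simple reciprocating strategy does better over repeated play than mutual defection does.

```python
# Standard prisoner's dilemma payoffs for the row player:
# temptation (5) > reward (3) > punishment (1) > sucker (0).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy the opponent's last move.
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(a, b, rounds=100):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    for _ in range(rounds):
        move_a, move_b = a(seen_by_a), b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Mutual reciprocation settles into cooperation and outscores
# mutual defection, even though "D" dominates each single round.
print(play(tit_for_tat, tit_for_tat))    # -> (300, 300)
print(play(always_defect, always_defect))  # -> (100, 100)
```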
As an afflicted individual, I appreciate the content warning. I’m responding without having read the rest of the comment. This is a note of gratitude to you, and a data point that for yourself and others that such content warnings are appreciated.
I second Evan that the warning was a good idea, but I do wonder whether it would be better to just say “content warning”; “Basilisk” sounds culty, might point confused people towards dangerous or distressing ideas, and is a word which we should probably be not using more than necessary around here for the simple PR reason of not looking like idiots.
Yeah, other terminology is probably a better idea. I’d avoided ‘trigger’ because it isn’t likely to actually trigger anything, but there’s no reason to use new terms when perfectly good existing ones are available. ‘Content warning’ isn’t quite right, but it’s close enough, and enough people are unaware of the original meaning, that it’s probably preferable to use.
Mostly in the analysis of complex phenomena with multiple incompatible (or barely compatible) frameworks for looking at them.
A photon is a wave.
A photon is a particle.
Love is temporary insanity.
Love is the most beautiful feeling you can have.
Etc., etc.
It’s possible to use particle models or wave models to make predictions about photons, but believing a photon is both of those things is a separate matter, and is neither useful nor true—a photon is actually neither.
Truth is not beauty, so there’s no contradiction there, and even the impression of one disappears if the statements are made less poetic and oversimplified.
I agree, and it’s something I could, maybe should, help with instead of just complaining about. What’s stopping you from doing this? If you know someone else was actively doing the same, and could keep you committed to the goal in some way, would that help? If that didn’t work, then, what would be stopping us?
In organized form: I’ve joined the Youtopia page, and the current efforts appear to be either busywork or best completed by a native speaker of a different language; there’s no obvious organization regarding generalized goals, and no news updates at all. I’m not sure if this is because MIRI is using a different format to organize volunteers, because MIRI doesn’t promote the Youtopia group that seriously, because MIRI doesn’t have any current long-term projects that can be easily presented to volunteers, or for some other reason.
For individual-oriented work, I’m not sure what to do, and I’m not confident I’m the best person to do it. There are also three separate issues, with no obvious interrelation between them. Improving the Sequences and the accessibility of the Sequences is the most immediate and obvious thing, and I can think of a couple of different ways to go about it:
The obvious first step is to make /any/ eBook, which is why a number of people have done just that. This isn’t much more comprehensible than just linking to the Sequences page on the Wiki, and in some cases may be less useful, and most of the other projects seem better-designed than I can offer.
Improve indexing of the Sequences for online access. This does seem like low-hanging fruit, possibly because people are waiting for a canonical order, and the current ordering is terrible. However, I don’t think it’s a good idea to just randomly edit the Sequences Wiki page, and Discussion and Main aren’t really well-formatted for a long-term, version-heavy discussion. (And it seems not Wise for my first Discussion or Main post to be “shake up the local textbook!”) I have started working on a dependency web, but this effort doesn’t seem to produce marginal benefits until large sections are completed.
The Sequences themselves are written as short, bite-sized pieces for a generalized audience in a specific context, which may not be optimal for long-form reading in a general context. In some cases, components that were good enough to start with now have clearer explanations… that have circular redundancies. Writing bridge pieces to cover these attributes, or writing alternative descriptions for the more insider-centric Sequences, would work within existing structures and provide benefit at fairly small intervals. This requires a fairly deep understanding of the Sequences, and does not appear to be low-hanging fruit. (And again, it’s not necessarily Wise for my first Discussion or Main post to be “shake up the local textbook!”)
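For what it’s worth, the dependency-web idea reduces to a small, well-understood problem: posts form a directed graph of prerequisites, and a canonical reading order is just a topological sort of that graph. A minimal sketch (the dependency edges below are invented for illustration, not an actual proposal for the Sequences’ ordering):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency web: each post maps to the set of posts
# that should be read before it. Edges here are made up.
deps = {
    "The Simple Truth":           set(),
    "Making Beliefs Pay Rent":    {"The Simple Truth"},
    "Words as Hidden Inferences": {"The Simple Truth"},
    "How an Algorithm Feels":     {"Words as Hidden Inferences"},
}

# static_order() yields every post after all of its prerequisites,
# and raises CycleError if the web contains a circular dependency.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

One nice side effect of formalizing it this way is that circular redundancies of the kind mentioned above stop being a vague complaint: the sorter detects them mechanically.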
But this is separate from MIRI’s ability to work with insiders and only marginally associated with its ability to work with outsiders. There are folk with very significant comparative advantages (i.e., anyone inside MIRI, anyone in California, most people who accept their axioms) on these matters, and while outsiders have managed to have major impact despite that, the example is LukeProg with the low-hanging fruit of basic nonprofit organization, which is a pretty high bar to match.
There are some possibilities—translating prominent posts to remove excessive jargon or wordiness (or even Upgoer Fiving them), working on some reputation problems—but none of these seem to have obvious solutions, and wrong efforts could even have negative impact. See, for example, a lot of coverage in more mainstream web media. I’ve also got a significant anti-academic streak, so it’s a little hard for me to understand the specific concern that Scott Alexander/su3su2u1 were raising, which may complicate matters further.
This is one of the things that keep me puzzled. How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?
Is it because people don’t volunteer enough for the work because proofreading seems low status? Is it a bystander effect, where everyone assumes that someone else is already working on it? Are all people just reading LW for fun, but unwilling to do any real work to help? Is it a communication problem, where MIRI has a lack of volunteers, but the potential volunteers are not aware of it?
Just print the whole fucking thing on paper, each chapter separately. Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven’t read the whole Sequences, they can just pick a chapter they haven’t read yet, and just read it, while marking the found errors on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.
I used to work as a proofreader for MIRI, and was sometimes given documents with volunteers’ comments to help me out. In most cases, the quality of the comments was poor enough that in the time it took me to review the comments, decide which ones were valid, and apply the changes, I could have just read the whole thing and caught the same errors (or at least an equivalent number thereof) myself.
There’s also the fact that many errors are only such because they’re inconsistent with the overall style. It’s presumably not practical to get all your volunteers to read the Chicago Manual of Style and agree on what gets a hyphen and such before doing anything.
I’m just reading LW for fun and unwilling to do any real work to help, FWIW.
It’s the ‘norm-palatable’ part more than the proofreading aspect, unfortunately, and I’m not sure that can readily be made volunteer work.
As far as I can tell, the proofreading part began in late 2013, and involved over two thousand pages of content to proofread through Youtopia. The only Sequence-related volunteer work on the Youtopia site now involves translation into non-English languages, so the public volunteer proofreading is done and likely has been done for a while (wild guess: probably somewhere in mid-summer 2014?). MIRI is likely focusing on layout and similar publishing-level issues, and as far as I’ve been able to tell, they’re looking at a release at the end of the year, which strongly suggests that they’ve finished the proofreading aspect.
That said, I may have outdated information: the Sequences eBook has been renamed several times in progress for a variety of good reasons, I’m not sure Youtopia is the current place most of this is going on, and AlexVermeer may or may not be the lead on this project and may or may not be more active elsewhere than these forums. There are some public project attempts to make an eReader-compatible version, though these don’t seem much stronger from a reading-order perspective.
In fairness, doing /good/ layout and ePublishing does take more specialized skills and some significant time, and MIRI may be rewriting portions of the work to better handle the limitations of a book format—where links are less powerful tools, where a large portion of reader devices support only grayscale, and where certain media presentation formats aren’t possible. At least from what I’ve seen in technical writing and pen-and-paper RPGs, this is not a helpfully parallel task: everyone must use the same toolset and design rules, or all of their work is wasted. There was also a large amount of internal MIRI rewriting involved, as even the early version made available to volunteer proofreaders was significantly edited.
Less charitably, while trying to find this information I’ve found references to an eBook project dating back to late 2012, so nine months may be a low-end estimate. Not sure if that’s the same project or if it’s a different one that failed, or if it’s a different one that succeeded and I just can’t find the actual eBook result.
Thanks for the suggestion. I’ll plan some meetups around this. Not the whole thing, mind you. I’ll just get anyone willing at the weekly Vancouver meetup to do exactly that: take a mild amount of time reviewing a chapter/post, and providing feedback on it or whatever.