Bugs like this could be found in LaTeX source using regular expressions.
If a bug like this happens, it probably happens more than once in a long text. So when a human finds the first example, a computer could detect all remaining instances of the same pattern. (I don’t recommend automatically fixing these bugs, only reporting them.)
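A report-only pass might look roughly like the sketch below (the pattern and the draft.tex filename are only placeholders):

#!/usr/bin/perl
use strict;
use warnings;

# Report every remaining occurrence of a known bad pattern; never rewrite the file.
my $pattern = qr/\s+-\s+/;    # e.g. a spaced hyphen where a dash was intended
open my $fh, '<', 'draft.tex' or die "cannot open draft.tex: $!";
while (my $line = <$fh>) {
    print "line $.: $line" if $line =~ $pattern;
}
close $fh;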
Oh dear. Attempting to parse LaTeX with regexes is only slightly more insane than attempting to parse HTML with regexes.
On the other hand, an interactive regexp search-and-replace is quite reasonable. Any good text editor should support such functionality...
Sure, and any good (human) editor should have a macro package for working in such a way. I’m saying that LaTeX just makes it much harder to work like that, effectively.
It sure does. LaTeX is kind of a ridiculous hack. The semantics aren’t even consistent. I kind of wish there was a mature publishing system based on DRYML.
You and I both. Unfortunately any competing system has a nearly insurmountable barrier to entry. Kind of “Worse is Better” taken to insane extremes.
The best approach would be to build something completely backwards compatible. That is, it allows easy embedding of LaTeX code and optionally compiles out to .tex.
I think I disagree, but it would depend on implementation details.
One possibility I can see is that you keep around a copy of the LaTeX distribution to parse these easily embedded LaTeX fragments (something LuaTeX might someday turn out to be). In that case, you’re still stuck supporting LaTeX’s monstrosity of a toolchain, and there’s still, e.g., no LaTeX on the iPad.
Another possibility is that you rewrite the LaTeX engine … ah, nevermind, this isn’t a possibility.
Well, I volunteer to try.
if ($text =~ m/\s+-\s+/) { print "Hyphen in place of a dash.\n"; }
If this line could find a dozen bugs, it’s worth using. Even if it won’t find all instances.
Congratulations. You’ve just triggered a false positive on almost every minus sign in existence. (e.g., $1 - 1 = 0$.)
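To see it concretely (illustrative strings only), the same pattern flags an ordinary subtraction just as readily as a misplaced hyphen:

# Both strings match \s+-\s+, so the subtraction is reported too.
for my $line ('word - word, where a dash was meant', '$1 - 1 = 0$') {
    print "flagged: $line\n" if $line =~ /\s+-\s+/;
}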
I would love it if what you suggest were possible, but it just isn’t. Not when packages feel free to roll their own DSLs for anything.
Yes, but for each false positive all it does is print a message. Since there are rather few instances of minus signs compared to intended em dashes, this doesn’t seem like much of a problem, and ignoring the irrelevant messages doesn’t introduce more than a trivial amount of work. Given that all equations need to be converted to the math environment (probably manually), and that the time it takes a human to do the conversion (even when it just means adding $ around them) is orders of magnitude greater than the time taken to do nothing while reading that particular message, we can merrily ignore the false-positive issue as not worthy of optimisation.
It’s almost exactly what I will do. It would be difficult to make a utility that got everything perfectly right every time without human intervention—that requires implementing comprehension skills and common sense. However, it is trivial to get something that does it well enough for our purposes with only minimal human intervention required.
See this comment.
Every minus sign in the Sequences? What, all three of them?