The slowest phase in a nonoptimizing compiler is lexical scanning. (An optimizer can usefully absorb arbitrary amounts of effort, but most compiles don't strictly need one.) For most languages, scanning can be done in a few cycles per byte. Scanning with finite automata can also be done in parallel in O(log n) time, though I don't know of any compiler that does that. So a system built for fast turnaround, using methods we already know (like good old Turbo Pascal), ought to be able to compile several lines per kilocycle: at a few cycles per byte and a few dozen bytes per line, a line costs a couple hundred cycles, so a 1 GHz machine should manage millions of lines per second. Even so, you still want to recompile only small chunks and make linking cheap; in the limit there are the old 8-bit BASICs, which essentially treated each line of the program as a compilation unit. See P. J. Brown's old book, Writing Interactive Compilers and Interpreters, or Chuck Moore's colorForth.
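
For concreteness, here is a minimal sketch of the kind of table-driven finite-automaton scanner that gets down to a few cycles per byte. The states and character classes are invented for illustration, not taken from any particular compiler; the point is the shape of the hot loop, roughly one indexed table load per input byte:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Invented states, just to show the shape of the loop. */
    enum { S_START, S_IDENT, S_NUMBER, N_STATES };

    /* next_state[state][byte] -> state; built once, reused every compile. */
    static uint8_t next_state[N_STATES][256];

    static void build_tables(void) {
        memset(next_state, S_START, sizeof next_state);  /* default: token boundary */
        for (int c = 'a'; c <= 'z'; c++) {
            next_state[S_START][c] = S_IDENT;
            next_state[S_IDENT][c] = S_IDENT;
        }
        for (int c = '0'; c <= '9'; c++) {
            next_state[S_START][c]  = S_NUMBER;
            next_state[S_NUMBER][c] = S_NUMBER;
            next_state[S_IDENT][c]  = S_IDENT;  /* digits may continue an identifier */
        }
    }

    /* The hot loop: roughly one indexed load and one store per input byte. */
    static void scan(const char *src, size_t n, uint8_t *states_out) {
        uint8_t s = S_START;
        for (size_t i = 0; i < n; i++) {
            s = next_state[s][(uint8_t)src[i]];
            states_out[i] = s;  /* a real lexer would emit a token on each return to S_START */
        }
    }

    int main(void) {
        const char *src = "x1 42 foo";
        uint8_t out[16];
        build_tables();
        scan(src, strlen(src), out);
        for (size_t i = 0; i < strlen(src); i++)
            printf("'%c' -> state %d\n", src[i], out[i]);
        return 0;
    }

The O(log n) parallel version falls out of the same table: each input byte denotes a function from states to states, function composition is associative, so a parallel prefix over those per-byte functions scans the whole buffer in logarithmic depth.
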