The serial speed of a workstation would be limited, but with that much memory at your disposal you could have many workstations active at once.
You may not get a huge speedup developing individual software components, but for larger projects you’d be the ultimate software development “team”: the effective output of N programmers (where N is the number of separable components), with near-zero coordination overhead (basically, the cost of task-switching). In other words, you’d effectively bypass Brooks’s law.
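To put a rough number on that (my own illustrative framing, not anything from the original comment): Brooks-style coordination cost grows with the number of pairwise communication channels in a team, whereas a single mind sharding itself across N independent components pays only on the order of N task switches.

$$ \text{channels}(N) \;=\; \binom{N}{2} \;=\; \frac{N(N-1)}{2}, \qquad \text{e.g. } N = 20 \;\Rightarrow\; 190 \text{ channels vs. } \sim 20 \text{ task switches.} $$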
So why not build your own OS and compiler(s), heavily optimized for speed and parallelism? You could always fall back on study and writing while you’re waiting for 10,000 compiles to finish.
EDIT: You’d also have an enormous amount of time to search for highly modular designs that benefit maximally from your anti-Brookian development constraints.
I would hope to have been given programming interfaces that didn’t have a multi-fortnight compile time. I think a primitive assembly language and a way of specifying logic circuits would be easy to provide. Books about language and OS design would also be good; quite likely existing languages wouldn’t be optimal for near-instant compiling. I’d look at Forth as a possible language rather than any I currently use. There should also be workstations with graphics output controlled by programmable analogue circuits, for when you’ve worked out how to program them efficiently.
Yes, these are some good points, loqi.
Much would depend on how parallel compilers advance by the time such an AGI design becomes feasible.
Right now compilers are just barely multi-threaded, and so they have a very long way to go before reaching their maximum speed.
So the question really becomes: what are the limits of compilation speed? Say you had the fastest possible C++ compiler running on dozens of GPUs, for example.
I’m not a compilation expert, but from what I remember, many of the steps in compilation and linking are inherently serial, and much would need to be rethought. This would be a great deal of work.
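For what it’s worth, here is a minimal C++ sketch (my own illustration, not anything from the thread) of the coarse-grained parallelism build systems already exploit: independent translation units compile concurrently, while the link step, which consumes every object file at once, stays serial. The file names and plain “cc” command lines are placeholders.

```cpp
// Illustrative sketch only: compile independent translation units in
// parallel; the link step needs all the objects at once, so it stays serial.
// File names and "cc" invocations are placeholders.

#include <cstdlib>   // std::system
#include <future>
#include <string>
#include <vector>

int main() {
    // Hypothetical translation units of some project.
    std::vector<std::string> units = {"lexer.c", "parser.c", "codegen.c"};

    // One compile job per unit; the units don't depend on each other.
    std::vector<std::future<int>> jobs;
    for (const auto& u : units)
        jobs.push_back(std::async(std::launch::async, [u] {
            return std::system(("cc -c " + u).c_str());
        }));

    for (auto& j : jobs)
        if (j.get() != 0) return 1;  // a compile failed

    // Linking is the serial tail.
    return std::system("cc lexer.o parser.o codegen.o -o app");
}
```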
There would still be minimum times just to load data to and from storage and transfer it around the network if the program is distributed.
And depending on what you are working on, there is the minimum debug/test cycle. For most complex real-world engineering systems, this is always going to be the ultimate limiter.
The slowest phase in a nonoptimizing compiler is lexical scanning. (An optimizer can usefully absorb arbitrary amounts of effort, but most compiles don’t strictly need it.) For most languages, scanning can be done in a few cycles per byte. Scanning with finite automata can also be done in parallel in O(log n) time, though I don’t know of any compilers that do that. So a system built for fast turnaround, using methods we know now (like good old Turbo Pascal), ought to be able to compile several lines per subjective second given ~1 kcycle per subjective second. You’d therefore still want to recompile only small chunks and make linking cheap; in the limit there are the old 8-bit BASICs that essentially treated each line of the program as a compilation unit. See P. J. Brown’s old book, or Chuck Moore’s colorForth.
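As a minimal sketch of the parallel-scanning trick mentioned above (my own illustration, using a toy two-state DFA that only tracks whether we’re inside a string literal): each chunk of input is summarized as a state-to-state table, and because table composition is associative, the per-chunk tables can be combined in a reduction tree of O(log n) depth. The chunking, thread count, and DFA are all placeholders.

```cpp
// Illustrative sketch only: parallel lexical scanning by composing DFA
// transition functions.  Each chunk of input is summarized as a table
// mapping "state on entry" -> "state on exit"; composition is associative,
// so the combine step can run as a reduction tree of O(log n) depth.
// The DFA is a toy; a real lexer would also recover token boundaries.

#include <algorithm>
#include <array>
#include <cstdint>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

constexpr int kStates = 2;                        // 0 = outside string, 1 = inside
using Table = std::array<std::uint8_t, kStates>;  // entry state -> exit state

// Toy transition function: a double quote toggles the "inside string" state.
std::uint8_t step(std::uint8_t state, char c) {
    return (c == '"') ? static_cast<std::uint8_t>(state ^ 1) : state;
}

// Summarize a chunk of input as a full entry-state -> exit-state table.
Table summarize(const char* begin, const char* end) {
    Table t;
    for (int s = 0; s < kStates; ++s) {
        std::uint8_t cur = static_cast<std::uint8_t>(s);
        for (const char* p = begin; p != end; ++p) cur = step(cur, *p);
        t[s] = cur;
    }
    return t;
}

// Compose two tables: apply a, then b.
Table compose(const Table& a, const Table& b) {
    Table r;
    for (int s = 0; s < kStates; ++s) r[s] = b[a[s]];
    return r;
}

int main() {
    std::string src = "int x = 1; const char* s = \"a ; b\"; /* toy input */";
    const int kChunks = 4;                        // placeholder degree of parallelism
    std::vector<Table> tables(kChunks);
    std::vector<std::thread> workers;

    // Summarize each chunk independently.
    std::size_t n = src.size(), chunk = (n + kChunks - 1) / kChunks;
    for (int i = 0; i < kChunks; ++i) {
        const char* b = src.data() + std::min(n, i * chunk);
        const char* e = src.data() + std::min(n, (i + 1) * chunk);
        workers.emplace_back([&tables, i, b, e] { tables[i] = summarize(b, e); });
    }
    for (auto& w : workers) w.join();

    // Sequential combine shown for brevity; with more chunks you'd use a
    // balanced tree of compose() calls to get the logarithmic depth.
    Table total = tables[0];
    for (int i = 1; i < kChunks; ++i) total = compose(total, tables[i]);

    std::printf("final DFA state starting from 0: %d\n", int(total[0]));
    return 0;
}
```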
Nitpick: C++ is the slowest-to-compile mainstream language, so it’s probably not the best example when discussing the ultimate limits of compilation speed. It heavily favors trading off compilation speed for abstractive power, which probably isn’t a good deal for a THz brain that can afford to spend more time coding (this is complicated by the fact that more code leads to more bugs leads to more iterations).
Yes, you’d probably need to throw out a lot of traditional compiler architecture and start from scratch. But I think this would be mostly conceptual, “hands-off” work, so divide “a great deal” of it by 10^6. At worst, I think it would be comparable to the level of effort required to master a field, so I don’t think it’s any less realistic than your scholarship hypothetical.
No silver bullet for network latency; this is definitely a ceiling on low-level parallel speedups. I don’t think it’s much of a problem for anti-Brookian scaling, though, since the bottlenecks encountered by each “virtual developer” will be much slower than the network.
The debug/test cycle will certainly be an issue, but here too the economics are very different for a THz brain. For one thing, you can afford to test the living daylights out of individual components: tests can be written at brain-speed, and designed to be independent of one another, meaning they can run in parallel. You’d want to specify component boundaries precisely enough that most of the development iteration takes place at the component level, but this is no large burden; design work runs at brain-speed.
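A minimal sketch of what that looks like in practice, assuming the tests really are independent and side-effect-free (the test names and bodies below are placeholders): each test runs on its own task, and the harness just tallies failures.

```cpp
// Illustrative sketch only: a trivial harness for running independent,
// side-effect-free component tests in parallel.  Names and bodies are
// placeholders.

#include <cstdio>
#include <functional>
#include <future>
#include <string>
#include <utility>
#include <vector>

struct TestCase {
    std::string name;
    std::function<bool()> run;  // must share no mutable state with other tests
};

int main() {
    std::vector<TestCase> tests = {
        {"parser_handles_empty_input", [] { return true; }},      // placeholder bodies
        {"linker_resolves_symbols",    [] { return true; }},
        {"scheduler_is_fair",          [] { return 1 + 1 == 2; }},
    };

    // Launch every test on its own task; independence is what makes this safe.
    std::vector<std::pair<std::string, std::future<bool>>> pending;
    for (auto& t : tests)
        pending.emplace_back(t.name, std::async(std::launch::async, t.run));

    int failures = 0;
    for (auto& [name, fut] : pending) {
        bool ok = fut.get();
        std::printf("%-32s %s\n", name.c_str(), ok ? "PASS" : "FAIL");
        if (!ok) ++failures;
    }
    return failures;
}
```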
I can’t shake the feeling that I’m just scratching the surface of viable development strategies in this scenario. Swapping the scarcity of CPU time and programmer time invalidates a lot of modern intuition concerning the economy of programming (at least, my brain is certainly groaning under the strain). Older intuitions based around slower computers aren’t much better, since they were also memory- and parallelism-constrained. Very thought-provoking post.
This is especially the case given that a THz brain does not particularly need to worry about industry acceptance or the prevalence of libraries. C++ trades off compilation speed for abstractive power in an extremely inefficient and largely obsolete way. It isn’t even all that powerful in terms of possible abstractions, at least relative to a serious language.
What do you have in mind for a serious language? Something new?
I think the advantage of C++ for a hyperfast thinker (and by the way, my analysis was of a GHz brain, not a THz brain; the latter is even weirder) is execution speed. You certainly could, and probably would, want to test some things with a scripting language, but 1 kHz computers are really, really slow, even when massively parallel. You would want every last cycle.
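Taking the 10^6 speed-up figure used elsewhere in the thread at face value (my arithmetic, not the commenter’s), the numbers behind “1 kHz computers” and “multi-fortnight compile times” are just:

$$ \frac{1\ \text{GHz}}{10^{6}} = 1\ \text{kHz}, \qquad 2\ \text{s (real compile)} \times 10^{6} = 2\times 10^{6}\ \text{subjective s} \approx 23\ \text{subjective days}. $$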
One other interesting way to make money: you could absolutely clean house in computer trading. You could think at the same speed as the simple algorithmic traders but apply the massive intelligence of a cortex to predict and exploit them. This is a whole realm humans cannot enter.