Nitpick: C++ is the slowest-to-compile mainstream language, so it’s probably not the best example when discussing the ultimate limits of compilation speed. It heavily favors trading off compilation speed for abstractive power, which probably isn’t a good deal for a THz brain that can afford to spend more time coding (this is complicated by the fact that more code leads to more bugs leads to more iterations).
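To make the nitpick concrete, here's a toy sketch of the trade (my example, not from the post): template metaprogramming buys compile-time abstraction by making the compiler do the work, instantiating a distinct type for every argument.

```cpp
// Toy illustration of C++ trading compile time for abstractive power:
// the compiler instantiates a distinct Fib<N> struct for every N, so
// all the arithmetic happens during compilation, not at run time.
#include <cstdio>

template <unsigned N>
struct Fib {
    static constexpr unsigned long long value = Fib<N - 1>::value + Fib<N - 2>::value;
};
template <> struct Fib<1> { static constexpr unsigned long long value = 1; };
template <> struct Fib<0> { static constexpr unsigned long long value = 0; };

int main() {
    // Deepening the recursion (or adding more instantiations) grows
    // compile time while the generated binary stays trivial.
    std::printf("fib(40) = %llu\n", Fib<40>::value);
    return 0;
}
```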
Yes, you’d probably need to throw out a lot of traditional compiler architecture and start from scratch. But I think this would be mostly conceptual, “hands-off” work, so divide “a great deal” of it by 10^6. At worst, I think it would be comparable to the level of effort required to master a field, so I don’t think it’s any less realistic than your scholarship hypothetical.
No silver bullet for network latency; this is definitely a ceiling on low-level parallel speedups. I don’t think it’s much of a problem for anti-Brookian scaling though, since the bottlenecks encountered by each “virtual developer” will be much slower than the network.
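For scale, a back-of-envelope sketch (both numbers are my assumptions: the 10^6 speedup from above and a typical ~100 µs datacenter round trip):

```cpp
// Back-of-envelope: what a network round trip feels like, subjectively,
// to a mind sped up by 10^6. Both inputs are assumed, not from the post.
#include <cstdio>

int main() {
    const double rtt_s   = 100e-6; // ~100 microsecond datacenter round trip (assumed)
    const double speedup = 1e6;    // subjective speedup used elsewhere in the thread
    std::printf("subjective round trip: ~%.0f seconds\n", rtt_s * speedup); // ~100 s
    return 0;
}
```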
The debug/test cycle will certainly be an issue, but here also the economies are very different for a THz brain. For one thing, you can afford to test the living daylights out of individual components: Tests can be written at brain-speed, and designed to be independent of one another, meaning they can run in parallel. You’d want to specify component boundaries precisely enough that most of the development iteration would take place at the component level, but this is no large burden—design work runs at brain-speed.
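A minimal sketch of what I mean, assuming the tests really are independent (the component names here are made up):

```cpp
// Sketch of "independent tests run in parallel": each test exercises only
// its own component, so they can be dispatched concurrently with no
// coordination between them.
#include <cstdio>
#include <future>
#include <vector>

bool test_parser()  { /* exercise the parser component only */  return true; }
bool test_codegen() { /* exercise the codegen component only */ return true; }
bool test_runtime() { /* exercise the runtime component only */ return true; }

int main() {
    std::vector<std::future<bool>> results;
    for (auto test : {test_parser, test_codegen, test_runtime})
        results.push_back(std::async(std::launch::async, test)); // run concurrently

    bool all_ok = true;
    for (auto& r : results) all_ok &= r.get();
    std::printf("%s\n", all_ok ? "all components pass" : "failures");
    return 0;
}
```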
I can’t shake the feeling that I’m just scratching the surface of viable development strategies in this scenario. Swapping the scarcity of CPU time and programmer time invalidates a lot of modern intuition concerning the economy of programming (at least, my brain is certainly groaning under the strain). Older intuitions based around slower computers aren’t much better, since they were also memory- and parallelism-constrained. Very thought-provoking post.
> C++ is the slowest-to-compile mainstream language, so it’s probably not the best example when discussing the ultimate limits of compilation speed. It heavily favors trading off compilation speed for abstractive power, which probably isn’t a good deal for a THz brain that can afford to spend more time coding (this is complicated by the fact that more code leads to more bugs leads to more iterations).
This is especially the case given that a THz brain does not particularly need to worry about industry acceptance or the prevalence of libraries. C++ trades off compilation speed for abstractive power in an extremely inefficient and largely obsolete way. It isn’t even all that powerful in terms of possible abstractions, at least relative to a serious language.
> C++ trades off compilation speed for abstractive power in an extremely inefficient and largely obsolete way. It isn’t even all that powerful in terms of possible abstractions, at least relative to a serious language.
What do you have in mind for a serious language? Something new?
I think the advantage of C++ for a hyperfast thinker (and by the way, my analysis was of a GHz brain, not a THz brain; the latter is even weirder) is execution speed. You certainly could, and probably would, want to test some things with a scripting language, but 1 kHz computers are really, really slow, even when massively parallel. You would want every last cycle.
One other interesting way to make money: you could absolutely clean house in computer trading. You could think at the same speed as the simple algorithmic traders but apply the massive intelligence of a cortex to predict and exploit them. This is a whole realm humans cannot enter.