You could think about them, but you could not actually load programs, compile code, or debug and run programs any faster than a human.
Thinking about an algorithm and taking the time to actually write it is typically the overwhelming bottleneck in algorithm research. I do scientific computing, which I think is on the extreme end of having run time be a substantial portion of total development time. Some of my simulations take days/weeks to finish. Even in that case I would wager CPU time is only 1% of my total development time.
Moreover, if I were willing to sacrifice non-CPU time, I could probably parallelize a lot of my CPU time. Instead of testing 1 idea at a time, I could enumerate 1,000 ideas and then test them all simultaneously.
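Concretely, something like the sketch below is what I have in mind; run_simulation and the parameter grid are hypothetical stand-ins for whatever independent tests I would actually batch:

```python
# Illustrative sketch only: run_simulation and the parameter grid below are
# hypothetical placeholders for whatever independent tests get batched.
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params):
    # Placeholder for a real simulation; here it just scores the parameters.
    a, b = params
    return a * 0.1 + b

if __name__ == "__main__":
    ideas = [(a, b) for a in range(100) for b in range(10)]  # 1,000 candidate ideas
    with ProcessPoolExecutor() as pool:                      # defaults to one worker per CPU core
        scores = list(pool.map(run_simulation, ideas))       # run the whole batch in parallel
    best_score, best_params = max(zip(scores, ideas))
    print(best_score, best_params)
```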
I agree that a 1,000,000 times accelerated human would not be as powerful as a 1,000,000 times accelerated human with a 1,000,000 times accelerated computer, but I suspect the accelerated human would still get thousands of times as much work done.
I do scientific computing, which I think is on the extreme end of having run time be a substantial portion of total development time. Some of my simulations take days/weeks to finish. Even in that case I would wager CPU time is only 1% of my total development time.
I’m not sure how you can justify the 1% number, then. If you’ve had a single simulation take a week to finish, that’s already 1% of two years of development time.
I work in video games, specifically via cloud computing, and waiting on various computer tasks can easily take up 10%-30% of my time per day.
But even using your 1% number, the speedup would be 100x, not 1,000x.
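To spell out that arithmetic (a rough model, assuming thinking is accelerated a million-fold while waiting on the computer is not):

```python
def effective_speedup(wait_fraction, think_accel):
    """Overall speedup when only the thinking part of development is accelerated."""
    think_fraction = 1.0 - wait_fraction
    return 1.0 / (think_fraction / think_accel + wait_fraction)

# If 1% of wall-clock time is spent waiting on the computer, then even a
# 1,000,000x faster thinker only gets ~100x overall:
print(effective_speedup(0.01, 1_000_000))  # ~99.99
```

In the limit of infinite thinking speed, the ceiling is simply 1 divided by the waiting fraction.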
Yes, a massively accelerated human would still be able to do research and engineering faster than ordinary humans.
My larger point was that the effective speedup across the space of tasks/fields is highly uneven and a superfast thinker would get the most utility out of low-computational occupations that involve abstract thinking, such as writing.
If you’ve had a single simulation take a week to finish, that’s already 1% of two years of development time.
Good point. I tend to discount long chunks of CPU time because I usually overlap them with personal mini-vacations =P. Thinking about it in more detail, I probably spend about 10% of my time waiting for simulations to finish.
But even using your 1% number, the speedup would be 100x, not 1,000x.
That’s disregarding the option of running multiple tests/simulations/compilations at once.
I work in video games, specifically via cloud computing, and waiting on various computer tasks can easily take up 10%-30% of my time per day.
I do game dev on the side. Those projects are much larger than my scientific projects and leave much more room for parallel development. If I were a mind running at 1,000,000 times speed, I would branch my project into a hundred different miniprojects (the todo list for my current game has well over 100 well-separated items that would merge cleanly). I would write the code for all of them and then compile them all in one giant parallel batch. That would give me a 100-fold speedup on top of the 10-fold speedup, for a 1,000-fold speedup in total. I won’t claim that this would be a comfortable way to code, but it could be done.
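Here is the same kind of rough model for that combined strategy, assuming about 10% of my baseline time is spent waiting (per the figure above) and that the waits for the independent branches can be overlapped in a single batch:

```python
def throughput_speedup(wait_fraction, think_accel, branches):
    """Per-project speedup when thinking is accelerated and the compile/simulate
    waits of many independent branches are overlapped in one parallel batch."""
    think = (1.0 - wait_fraction) / think_accel  # writing/thinking, accelerated
    wait = wait_fraction / branches              # one shared wait amortized over all branches
    return 1.0 / (think + wait)

# ~10% of baseline time spent waiting, 1,000,000x thinking speed, 100 branches:
print(throughput_speedup(0.10, 1_000_000, 100))  # ~999, i.e. roughly 1,000x
```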
My larger point was that the effective speedup across the space of tasks/fields is highly uneven and a superfast thinker would get the most utility out of low-computational occupations that involve abstract thinking, such as writing.
50 years ago computers were much, much slower, but human minds were just as fast as they are today. Was it optimal back then to be a writer rather than a programmer? (Edit: This is a bit of a straw man, but I think it still sheds some light on the issue at hand.)
That would give me a 100-fold speedup on top of the 10-fold speedup, for a 1,000-fold speedup in total. I won’t claim that this would be a comfortable way to code, but it could be done.
While this seems possible in principle, it doesn’t sound as practical as massively parallelizing a single project or a small set of projects.
The problem is that you write project 1, and by the time it finishes in, say, 30 real seconds, a subjective year has gone by and you have just finished writing the code for project 100. The bigger problem would be the massive subjective lag before getting any debugging feedback, and the overhead of remembering what you were working on a year ago. You then make changes, and it’s another year of turnaround before you can test them...
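To put rough numbers on that lag, using the 1,000,000x figure:

```python
SPEEDUP = 1_000_000
SECONDS_PER_DAY = 86_400

def subjective_days(real_seconds, speedup=SPEEDUP):
    """How long a real-world wait feels to a mind running `speedup` times faster."""
    return real_seconds * speedup / SECONDS_PER_DAY

# A 30-second compile feels like roughly 347 days, i.e. nearly a year:
print(subjective_days(30))  # ~347.2
```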
I suspect that making a massively parallel compiler/linker/language to help close the speed gap somewhat would be the more effective primary strategy.
My larger point was that the effective speedup across the space of tasks/fields is highly uneven and a superfast thinker would get the most utility out of low-computational occupations that involve abstract thinking, such as writing.
50 years ago computers were much, much slower, but human minds were just as fast as they are today. Was it optimal back then to be a writer rather than a programmer?
If you thought one million times faster than any other human mind, then absolutely. Not an analogy at all. There is no analogy.
The bigger problem would be the massive subjective lag before getting any debugging feedback, and the overhead of remembering what you were working on a year ago.
Yes, I admit it would not be an ideal coding environment, but it could be done. Brain-time is cheap, so you have plenty of cycles to spare relearning your code from scratch after each debug cycle. You also have plenty of time to spare to write immaculate documentation, to ease the relearning process.
I suspect that making a massively parallel compiler/linker/language would be the most effective.
I agree. It would be my first project. Even if it took 100,000 years, that’s only about a month of real time! Hopefully I wouldn’t go insane before finishing =D
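Roughly checking that figure (same 1,000,000x assumption):

```python
SPEEDUP = 1_000_000

# 100,000 subjective years of work, compressed by the 1,000,000x speedup:
real_years = 100_000 / SPEEDUP   # 0.1 real years
real_days = real_years * 365.25  # ~36.5 days, a bit over a month
print(real_years, real_days)
```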