I work mainly in graphics/GPU programming. I had a longer explanation of why rendering at 60 million FPS would be difficult, but I cut it for brevity.
Let’s say you had ten trillion billion GPU chips. So what?
Both the neuromorphic brain and the GPU run on the same substrate and operate at roughly the same clock speeds, on the order of a gigahertz. At a speedup of a million or more, that leaves the accelerated mind with only about 100 to 1,000 clock cycles per subjective second.
This means the GPU would need to compute every pixel value in just a few dozen clock cycles in order to deliver 60 frames per subjective second.
So this is not a problem that can be solved by even infinite parallelization of current GPUs. You can only parallelize to the point where you have one little arithmetic unit working on each pixel; past that point, extra hardware adds throughput but does nothing to shorten the latency of a single frame.
There is no rendering pipeline I’m aware of that could possibly execute in just a few dozen or even a few hundred clock cycles. I think the current minimum is on the order of a million clock cycles or so, which at gigahertz clock rates works out to roughly 1,000 real-time FPS.
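To make the budget concrete, here is the back-of-the-envelope arithmetic I have in mind (the 1 GHz clock, the 10^6x speedup, and the million-cycle pipeline latency are all round illustrative numbers, not measurements):

```python
# Rough frame-budget arithmetic for rendering to a millionfold-accelerated mind.
CLOCK_HZ = 1e9             # assume ~1 GHz for both the neuromorphic mind and the GPU
SPEEDUP = 1e6              # assumed subjective speedup
FPS_SUBJECTIVE = 60        # target frames per subjective second

cycles_per_subj_second = CLOCK_HZ / SPEEDUP                 # ~1,000 cycles
frame_budget = cycles_per_subj_second / FPS_SUBJECTIVE      # ~17 cycles per frame

PIPELINE_MIN_CYCLES = 1e6  # rough minimum latency of a real rendering pipeline
real_fps = CLOCK_HZ / PIPELINE_MIN_CYCLES                   # ~1,000 real FPS
subj_minutes_per_frame = PIPELINE_MIN_CYCLES / cycles_per_subj_second / 60

print(f"budget: ~{frame_budget:.0f} cycles per frame")
print(f"actual pipeline: ~{real_fps:.0f} real FPS, "
      f"i.e. one frame every ~{subj_minutes_per_frame:.0f} subjective minutes")
```

In other words, even the fastest pipeline I know of would deliver something like one frame per subjective quarter hour, which is why more parallelism alone doesn’t close the gap.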
So in the farther future, once GPUs are parallel to the point of one thread per pixel, you may see them optimizing for the minimal clock-cycle path, but that is still a ways away, perhaps a decade.
The brain sits at the end of a long natural path that our machines will end up following under the same physical constraints. You go massively parallel to get the most energy-efficient computational use of your memory, and then you must optimize for extremely short computational circuits and massively high fan-in/fan-out.
With a 1,000,000-fold speedup in subjective time you probably have the luxury of developing great parallel algorithms.
You could think about them, but you could not actually load programs, compile code, or debug and run programs any faster than a human.
Thus I find it highly likely you would focus your energies on low-computational endeavors that could run at your native speed.
That’s an interesting insight. There should be another path, though: visual imagination, which already runs at (roughly?) the same speed as visual perception. We can already detect, to some extent, the images someone is imagining, and with uploads, putting images directly into their visual cortex should be comparatively straightforward. That way we can skip all the business of rendering geometric forms into pixels and decoding pixels back into geometric forms. If you want the upload to see a black dog you just stimulate “black” and “dog” rather than painting anything.
Yes! I suspect that eventually this could be an interesting application of cheap memristor/neuromorphic designs, if they become economically viable.
It should be possible to exploit the visual imagination/dreaming circuitry the brain has and make it more consciously controllable for an AGI, perhaps even to the point of being able to enter lucid dream worlds while fully conscious.
You could think about them, but you could not actually load programs, compile code, or debug and run programs any faster than a human.
Thinking about an algorithm and taking the time to actually write it is typically the overwhelming bottleneck in algorithm research. I do scientific computing, which I think is on the extreme end of having run time be a substantial portion of total development time. Some of my simulations take days/weeks to finish. Even in that case I would wager CPU time is only 1% of my total development time.
Moreover, if I were willing to sacrifice non-CPU time, I could probably parallelize a lot of my CPU time. Instead of testing 1 idea at a time I could enumerate 1,000 ideas and then test them all simultaneously.
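Something like this hypothetical sketch is what I have in mind (the ./simulate binary, its flags, and the parameter sweep are made-up placeholders, not my actual setup):

```python
# Hypothetical batch-testing sketch: instead of trying one idea at a time,
# launch a whole sweep of independent simulation runs side by side.
import subprocess
from concurrent.futures import ThreadPoolExecutor

ideas = [{"step_size": 10.0 ** -k} for k in range(1, 11)]  # placeholder parameter sweep

def run_simulation(params):
    # "./simulate" stands in for whatever existing solver you already drive
    # from the command line; each run is an independent process.
    cmd = ["./simulate", f"--step-size={params['step_size']}"]
    return subprocess.run(cmd, capture_output=True, text=True)

# Threads are fine here because the real work happens in the child processes.
with ThreadPoolExecutor(max_workers=len(ideas)) as pool:
    results = list(pool.map(run_simulation, ideas))
```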
I agree that a 1,000,000 times accelerated human would not be as powerful as a 1,000,000 times accelerated human with a 1,000,000 times accelerated computer, but I suspect the accelerated human would still get thousands of times as much work done.
I do scientific computing, which I think is on the extreme end of having run time be a substantial portion of total development time. Some of my simulations take days/weeks to finish. Even in that case I would wager CPU time is only 1% of my total development time.
I’m not sure how you justify the 1% number, then. If a single simulation has taken a week to finish, that one run alone accounts for 1% of roughly two years of development time.
I work in video games, specifically via cloud computing, and waiting on various computer tasks can easily take up 10%-30% of my time per day.
But even using your 1% number, the speedup would be 100x, not 1,000x.
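That 100x is just the usual Amdahl’s-law-style cap: whatever fraction of your wall-clock time is spent waiting on unaccelerated computers bounds the overall speedup, no matter how fast you think. A quick sanity check with the 1% and 10% figures from this thread (a sketch, assuming the waiting itself isn’t parallelized):

```python
# Amdahl's-law-style cap on the effective speedup when only thinking accelerates.
def effective_speedup(waiting_fraction, thinking_speedup=1e6):
    # Thinking time shrinks by the speedup factor; waiting time stays the same.
    return 1.0 / (waiting_fraction + (1.0 - waiting_fraction) / thinking_speedup)

print(f"{effective_speedup(0.01):.0f}x")  # ~100x if 1% of time is waiting on the machine
print(f"{effective_speedup(0.10):.0f}x")  # ~10x  if 10% of time is waiting

# And the week-long-run check: if one week is 1% of total development time,
# total development time is about 100 weeks, i.e. roughly two years.
print(f"{7 / 0.01 / 365:.1f} years")
```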
Yes, a massively accelerated human would still be able to do research and engineering faster than ordinary humans.
My larger point was that the effective speedup across the space of tasks/fields is highly uneven and a superfast thinker would get the most utility out of low-computational occupations that involve abstract thinking, such as writing.
If a single simulation has taken a week to finish, that one run alone accounts for 1% of roughly two years of development time.
Good point. I tend to discount long chunks of CPU time because I usually overlap them with personal mini-vacations =P. Thinking about it in more detail, I probably spend about 10% of my time waiting for simulations to finish.
But even using your 1% number, the speedup would be 100x, not 1,000x.
That disregards running multiple tests/simulations/compilations at once.
I work in video games, specifically via cloud computing, and waiting on various computer tasks can easily take up 10%-30% of my time per day.
I do game dev on the side. Those projects are much larger than my scientific projects and leave much more room for parallel development. If I were a mind running at 1,000,000 times speed, I would branch my project into a hundred different miniprojects (the to-do list for my current game has well over 100 well-separated items that would merge back together cleanly). I would write the code for all of them and then compile them all in one giant parallel batch. This would give me a 100-fold speedup on top of the 10-fold speedup, for a 1,000-fold speedup in total. I won’t claim that this would be a comfortable way to code, but it could be done.
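Rough numbers for that claim, extending the Amdahl-style estimate above (a sketch that assumes the waiting on all 100 builds overlaps perfectly, which is optimistic):

```python
# If ~10% of a serial workflow is waiting on builds/tests, batching 100
# independent miniprojects spreads that real-time wait across all of them.
waiting_fraction = 0.10   # share of serial development time spent waiting
batch_width = 100         # miniprojects compiled/tested in one batch
thinking_speedup = 1e6    # assumed subjective speedup of the mind

per_project_wait = waiting_fraction / batch_width
speedup = 1.0 / (per_project_wait + (1.0 - waiting_fraction) / thinking_speedup)
print(f"~{speedup:.0f}x")  # roughly 1,000x per project, under perfect overlap
```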
My larger point was that the effective speedup across the space of tasks/fields is highly uneven and a superfast thinker would get the most utility out of low-computational occupations that involve abstract thinking, such as writing.
50 years ago computers were much, much slower, but human minds were just as fast as they are today. Was it optimal back then to be a writer rather than a programmer? (Edit: This is a bit of a straw man, but I think it still shines some light on the issue at hand)
This would give me a 100-fold speedup on top of the 10-fold speedup, for a 1,000-fold speedup in total. I won’t claim that this would be a comfortable way to code, but it could be done.
While this seems possible in principle, it doesn’t sound as practical as massively parallelizing a single project or a small set of projects.
The problem is that you write project 1, and by the time it finishes in, say, 30 seconds of real time, a subjective year has gone by and you have just finished writing the code for project 100. The problem would be the massive subjective lag before getting any debugging feedback and the overhead of remembering what you were working on a year ago. You then make changes and it’s another year of turnaround to test them...
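To put the lag in perspective, with the same assumed millionfold speedup:

```python
# 30 seconds of real compile time, experienced at a 10^6x subjective speedup.
real_seconds = 30
speedup = 1e6
subjective_days = real_seconds * speedup / 86_400  # seconds in a day
print(f"~{subjective_days:.0f} subjective days per debug round trip")  # ~347 days, about a year
```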
I suspect that making a massively parallel compiler/linker/language to help close the speed gap somewhat would be the more effective primary strategy.
My larger point was that the effective speedup across the space of tasks/fields is highly uneven and a superfast thinker would get the most utility out of low-computational occupations that involve abstract thinking, such as writing.
50 years ago computers were much, much slower, but human minds were just as fast as they are today. Was it optimal back then to be a writer rather than a programmer?
If you thought one million times faster than any other human mind, then absolutely. The situation 50 years ago is not an analogy at all; there is no analogy for this.
The problem would be the massive subjective lag before getting any debugging feedback and the overhead of remembering what you were working on a year ago.
Yes, I admit it would not be an ideal coding environment, but it could be done. Brain-time is cheap, so you have plenty of cycles to spare relearning your code from scratch after each debug cycle. You also have plenty of time to write immaculate documentation to ease the relearning process.
I suspect that making a massively parallel compiler/linker/language would be the most effective.
I agree. It would be my first project. Even if it took 100,000 subjective years, that’s only about a month of real time! Hopefully I wouldn’t go insane before finishing =D