Anybody have an idea for how to represent intelligence explosion graphically?
The concept you’re trying to convey might become more obvious if you used thought bubbles instead of arrows. Have the humans imagine the artificial brain, and it appears; then have the artificial brain imagine a bigger version of itself, and it grows; and so forth. (This will involve more frames in a larger .gif, but I think it will make the process clearer.)
Animated GIFs look unprofessional.
That is a problem. What do y’all think of the new image?
It doesn’t make as much sense without the context of showing the parochial human picture first, and I’m worried that without that context it’ll just come across as hyperbole. “The AI will be thiiiiiiiiiiis much smarter than Einstein!!!” It also suggests too strong a connection between recursive self-improvement and a specific level of intelligence.
Where’s EY?
(More seriously: that image looks much nicer)
Like. The big problem in explaining intelligence explosions is not explaining the process—in my experience, people grasp the process very intuitively from even my unclear explanations. The big problem is communicating the end result: recursive self-improvement takes AI off the far end of the human scale of intelligence. (The process might only be disputed as a way to reject the end result.) This image does a lot of that work right away.
Probably too silly to use here, but one thing that comes to mind is a brain reshaped to have the form of a nuclear mushroom.
That might be misinterpreted to mean “mind blowing.”
Maybe has the wrong connotations :P
(λf.(λx.f (x x)) (λx.f (x x))) {image of a brain}
What lambda expression grows exponentially with each evaluation?
It’s called the Y combinator. If evaluated lazily it won’t necessarily run forever.
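As a minimal sketch of that point: in a strict language like Python the classic Y combinator loops forever, but its eta-expanded call-by-value variant (often called the Z combinator) delays the self-application until the function is actually called, so it terminates. (The names `Z` and `fact` here are just illustrative.)

```python
# Call-by-value fixed-point combinator (the "Z combinator").
# Python evaluates arguments eagerly, so the classic Y combinator
# (λf.(λx.f (x x)) (λx.f (x x))) would recurse without bound here;
# wrapping the self-application x(x) in a lambda defers it until
# the resulting function is applied to an argument.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Using it to define factorial without explicit self-reference:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```

The deferral plays the same role lazy evaluation does in the pure lambda calculus: the next round of self-application only happens on demand.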