I am sympathetic to this viewpoint. However, I think there are large enough gains to be had from “just” an AI that: matches genius-level humans; has N times larger working memory; thinks N times faster; has “tools” (like calculators or Mathematica or Wikipedia) integrated directly into its “brain”; and is infinitely copyable. That gets you to https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message territory, which is quite x-risky.
Working memory is a particularly powerful intuition pump here, I think. Given that humans can hold about 8 items in working memory, it seems clear that something which can hold 800, or 8 million, such items would be qualitatively impressive, even if it’s “just” a quantitative scale-up.
You can then layer on more speculative things from the ability to edit the neural net. E.g., for humans, we are all familiar with flow states where we are at our best, most well-rested, and most insightful. It’s possible that if your brain is artificial, reproducing that state on demand is easier. The closest thing we have to this today is prompt engineering, which is a very crude way of steering an AI to use the “smarter” paths as it navigates through its stack of transformer layers. But the existence of such paths likely means we, or an introspective AI, could consciously promote their use.