Thanks for the feedback; I am new to writing in this style and may have erred too much towards deleting sentences while editing. But, as they say, if you never cut too much you're always too verbose. In particular, I appreciate the point that, when talking about how I am updating, I should make clear where I am updating from.
For instance, regarding human-level intelligence, I was describing my update relative to "me a year/month ago". I relistened to the Sam Harris/Yudkowsky podcast yesterday, and they detour for a solid 10 minutes about how "human-level" intelligence is a straw target. I think their arguments were persuasive, and that I would have endorsed them a year ago, but that they don't really apply to GPT. I had pretty much concluded that the difference between a 150 IQ AI and a 350 IQ AI would just be a matter of scale. GPT as a simulator/platform seems to me like an existence proof for a not-artificially-handicapped human-level AI attractor state. Since I had previously thought the entire idea was a distraction, this is an update towards human-level AI.
The impact on AI timelines mostly follows from diversion of investment. I will think on whether I have anything additional to add on that front.
I understand your reasoning much better now, thanks!
"GPT as a simulator/platform seems to me like an existence proof for a not-artificially-handicapped human-level AI attractor state" is a great way to put it and a very important observation.
I think the attractor state is more nuanced than "human-level". GPT is incentivized to learn to model "everyone everywhere all at once", if you will, which is a superhuman task—and while the default runtime behavior is human-level simulacra, I expect it to be possible to elicit superhuman performance by conditioning the model in certain ways or with a relatively small amount of fine-tuning/RL. Also, being simulated confers many advantages for intelligence (instances can be copied/forked, are much more programmable than humans, potentially run much faster, etc). So I generally think of the attractor state as being superhuman in some important dimensions, enough to be a serious foom concern.
Broadly, though, I agree with the framing—even if it’s somewhat superhuman, it’s extremely close to human-level and human-shaped intelligence compared to what’s possible in all of mindspace, and there is an additional unsolved technical challenge to escalate from human-level/slightly superhuman to significantly beyond that. You’re totally right that it removes the arbitrariness of “human-level” as a target/regime.
I’d love to see an entire post about this point, if you’re so inclined. Otherwise I might get around to writing something about it in a few months, lol.