I simply presumed that Eliezer was being sarcastic to get clicks, or offering an early April Fool’s joke to us.
There’s nothing “Terminator” about ChatGPT or GPT-4; it’s just a heavily trained, limited-focus algorithm that “sounds” like a human. No consciousness, will, intent, or being: just lots of lines of code executing on demand. The only interesting thing about it is that it demonstrates the limitations of the Turing Test.
It seems that if you hear something that sounds like a human, you assume it is one; the Turing Test turns out to be a test of the listener, not the speaker.
And for more clarity:
A). Try a little focus—climate change will solve the problem for us long before GPT-257 takes over the planet.
B). And even if you disagree, there is zero chance that our species reaches the universal agreement Eliezer hopes for on anything (where have you been for the last 500 million years?).
Not many people consider GPT-4 extremely dangerous on its own. Hooking something at that level of intelligence into a larger system with memory storage and other modules is a bit more threatening, and probably already sufficient to do great harm if wielded by malevolent actors who unleash it onto social media platforms, for example.
The real danger is that GPT-4 is a mile marker we’ve blown past on the road to ever more capable AI. At some point, likely before climate change becomes an existential threat, we lose control, and that’s when things get really weird, unpredictable, and dangerous.
Eliezer has near-zero hope for humanity’s survival. I think we’d all agree that the universal agreement he suggests is not plausible in the current world. He’s not advocating for it because he believes it will happen; rather, it’s the only thing he thinks might be enough to give us a shot at survival.