The important thing to notice is that all existing AIs are completely devoid of agency. And this is very good! Even if continued development of LLMs and image networks lets them surpass human performance pretty quickly, the current models are fundamentally incapable of doing anything dangerous of their own accord. They might be dangerous, but they’re dangerous the way a nuclear reactor is dangerous: a bad operator could cause damage, but it’s not going to go rogue on its own.
The models that are currently available are probably incapable of autonomous progress, but LLMs might be almost ready to pass that mark after getting day-long context windows and some tuning.
A nuclear reactor doesn’t try to convince you, through speech or text, to behave in a way you would not have before interacting with it. And that is assuming your claim that ‘current LLMs are not agentic’ holds true, which seems doubtful.
ChatGPT also doesn’t try to convince you of anything. If you explicitly ask it why you should do X, it will tell you, much like it will tell you anything else you ask for, but it doesn’t volunteer this information unprompted, nor does it weave it into answers to unrelated queries.