LLMs are not like other hypothetical AGIs: they have human behavior as a basic part of them, channeled directly. So they are probably more like uploads than AIs, including for alignment purposes.
Most standard arguments about AI alignment (like world-eating instrumental convergence or the weight of simple consistent preferences) are no more relevant to them than to humans. But the serial speedup in thinking is still there, so they have an advantage in the impending sequence of events that's too fast for humans to follow or meaningfully direct.
I realized this myself just a week ago! And you also highlight something that wasn't clear to me: for now, their important property (with respect to the singularity) is
the serial speedup in thinking … too fast for humans to follow or meaningfully direct
LLMs are a kind of human-level AI, though certainly not yet at genius level. However, they are already inhumanly fast.
No, it’s indirect: human behavior enters them via text and training.