True enough that an AGI won’t have the same “emotional loop” as humans, and that could be grounds for risk of some kind. It’s not clear that such “feelings” are actually needed at that level, and no one seems concerned about losing that ability through mind uploading (so perhaps it’s just a bias against machines?).
Also true that current levels of compute are enough for an AGI, and you at least hint at a change in architecture.
However, for the rest of the post, your descriptions are strictly talking about machine learning. It’s my continued contention that we don’t reach AGI under current paradigms, making such arguments about AGI risk moot.