It’s less about GPT-style systems in particular and more about “gradient descent producing black boxes”-style systems in general. The claim is that if we develop AGI this way, we are doomed. And we are on track to do it.