Holden seems to think this sort of development would happen naturally with the sort of AGI researchers we have nowadays.
I may have the terminology wrong, but I believe he’s thinking more about commercial narrow-AI researchers.
Now if they produce results like these, that would push the culture further towards letting computer programs handle any hard task. Programming seems hard.