At what point would you expect the average (non-rationalist) AI researcher to accept that they’ve created an AGI?
Easy answers first: the average AI researcher will accept it when others do.
At what point would you be convinced that human-level AGI has been achieved?
When the preponderance of evidence is heavily weighted in this direction. In one simple class of scenarios, this would involve unprecedented progress in areas currently limited by human attention, memory, I/O bandwidth, and the like. Some of these advances would likely not escape public attention. But there are many directions AGI could go.
Could you give a specific hypothetical, if you have the time? What would be a specific example of a scenario that you’d look at and go, “welp, that’s AGI!”? I ask because I can imagine most individual accomplishments being brushed away as “oh, I guess that was easier than we thought.”