perhaps that “an AGI” is not a binary yes/no, but just a capability slider. If that’s the case, then this approach indeed makes sense.
I also agree with this, for the record. I think of AI capabilities in more quantitative than qualitative terms, and I firmly believe the definition of AGI will only get muddier over this decade. That's why I try to avoid the morass the term AGI invokes and instead focus on quantitative distinctions between AIs and humans.
I expect there are still significant differences between your model and the “LLM Whisperer” model, though I notice I’m not quite sure what you’d say they are. Mind highlighting any cruxes you see?
If I do have issues with Janus World, it's probably that they overestimate how much anthropomorphic reasoning gets us (to be clear, I think a lot of people underestimate the power of anthropomorphic reasoning about LLMs), combined with their being far too sensational/mystical for my taste, which leads them to overrate the possibility of deceptive alignment, IMO.
My biggest difference in models is probably that I use less anthropomorphic reasoning on LLMs than Janus World does.