If you had to imagine a model of AI for a disorganized species to trip into, could you come up with anything safer than LLMs?
Conjecture’s CoEms, which are meant to be cognitively anthropomorphic and transparently interpretable. (They remind me a bit of the Chomsky-approved concept of “anthronoetic AI”.)