though as Geoff Hinton has pointed out, ‘confabulations’ might be a better word
I think Yann LeCun was the first to use this word: https://twitter.com/ylecun/status/1667272618825723909
Not much information is given about that so far; I was curious about that too.
“Algorithm for Concept Extrapolation”
I don’t see any recent publications from Paul Christiano related to this, so I guess the problem is still open.
parameters before L is less than ,
Should this be “after”?
AutoGPT was created by a non-coding VC
It looks like you are confusing AutoGPT with BabyAGI, which was created by Yohei Nakajima, who is a VC. The creator of AutoGPT (Toran Bruce Richards) is a game developer with decent programming (game-development) experience. Even the figure shown here is from BabyAGI (https://yoheinakajima.com/task-driven-autonomous-agent-utilizing-gpt-4-pinecone-and-langchain-for-diverse-applications/).
47 layers layer
Should this be “47 layers later”?
Really interesting idea.
Regarding the first one, I am not expecting a single prompt to generate the entirety of enwik8/enwik9. I am more interested in finding a set of prompts, with a lookup table if possible, that can replicate the enwik data.
Thanks for the pointer to the Chinchilla post, will look into it.
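To make the lookup-table idea concrete, here is a toy sketch (all names hypothetical, and the “prompts” are just synthetic ids rather than real model prompts): the corpus is split into chunks, each chunk is keyed by a prompt id, and replaying the prompts in order reconstructs the data exactly.

```python
# Toy illustration of the "set of prompts + lookup table" idea:
# map synthetic prompt ids to chunks of the corpus, then replay
# them in order to recover the original text verbatim.

def build_lookup(text: str, chunk_size: int = 16) -> dict:
    """Map a synthetic prompt id to each fixed-size chunk of the corpus."""
    n_chunks = (len(text) + chunk_size - 1) // chunk_size
    return {
        f"prompt_{i}": text[i * chunk_size : (i + 1) * chunk_size]
        for i in range(n_chunks)
    }

def replay(table: dict) -> str:
    """Concatenate completions in prompt order to recover the corpus."""
    return "".join(table[f"prompt_{i}"] for i in range(len(table)))

corpus = "The quick brown fox jumps over the lazy dog."
table = build_lookup(corpus, chunk_size=8)
assert replay(table) == corpus
```

Of course, for this to be interesting as compression the prompts would have to be much shorter than the chunks a model generates from them; this sketch only shows the bookkeeping, not the hard part.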
Not yet.
This requires hitting a window—our data needs to be good enough that the system can tell it should use human values as a proxy, but bad enough that the system can’t figure out the specifics of the data-collection process enough to model it directly. This window may not even exist.
Are there any real-world examples of this, not necessarily in the human-values setting?
From a complexity-theoretic viewpoint, how hard could ELK be? Is there any evidence that ELK is decidable?
Is there a separate post for the “train a reporter that is useful for another AI” proposal?
Silly idea: instead of thought-annotating AI Dungeon plays, we can start by annotating thoughts for Akinator game runs.
Pros: a much easier and faster way to build a dataset, with less ambiguity.
Cons: somewhat more restricted than the original idea.
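A minimal sketch of what one record in such a dataset could look like (a hypothetical schema, not anything Akinator actually exposes): each turn pairs the question and answer with the annotator’s free-text “thought” about why the question was asked.

```python
# Hypothetical record schema for a thought-annotated Akinator run:
# one AnnotatedTurn per question, one GameRun per full game.
from dataclasses import dataclass, field

@dataclass
class AnnotatedTurn:
    question: str  # e.g. "Is your character fictional?"
    answer: str    # "yes" / "no" / "probably" / ...
    thought: str   # annotator's reasoning for why this question narrows things down

@dataclass
class GameRun:
    target: str                              # the character the game converged on
    turns: list = field(default_factory=list)

run = GameRun(target="Sherlock Holmes")
run.turns.append(AnnotatedTurn(
    question="Is your character fictional?",
    answer="yes",
    thought="Splits the candidate space between real and fictional characters early.",
))
```

The appeal is that Akinator turns are short, discrete, and already framed as binary-ish questions, so the annotation target is much less ambiguous than a free-form AI Dungeon transcript.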
The proof need not be bogus: it can be a long, valid proof. But since you are describing the problem in natural language, the proof generated by the AGI need not be for the problem that you described.
Thanks for the feedback! Working on refining the write-up.