I agree that scaffolding can take us a long way towards AGI, but I’d be very surprised if GPT-4 as the core model were enough.
Yup, that wasn’t a critique; I just wanted to note something. By “seed of deception” I mean that the model may learn to exploit this ambiguity more and more if doing so is useful for passing some evals, while it helps the model perform computation unwanted by humans.
I see, so maybe in ways that are weird for humans to think about.
Leaving this comment to make a public prediction: I expect GPT-4, with the proper scaffolding, to be enough for roughly human-level AGI, with more than 50% confidence.