And this makes GPT-4 via API access a general-purpose tool-user generator, which it wouldn't be as reliably if it hadn't been RLed into this capability. It turns out the system message is not about enacting user-specified personalities, but about fluent use of user-specified tools.
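A minimal sketch of what "user-specified tools in the system message" looks like in practice. Only the chat-completions endpoint and message structure are the real API; the tool description and the WOLFRAM(...) reply convention are invented here for illustration.

```python
import os
import requests

# Hypothetical tool spec injected via the system message; the model is asked
# to emit a structured call, not to adopt a persona.
SYSTEM = (
    "You have one tool: WOLFRAM. To use it, reply with exactly one line of the "
    "form WOLFRAM(<query>) and nothing else. Otherwise answer directly."
)

def ask(user_msg: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4",
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": user_msg},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("What is the integral of x^2 from 0 to 3?"))
# If the RLed tool-use behaviour kicks in, the reply should be something like:
# WOLFRAM(integrate x^2 from 0 to 3)
```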
So the big news is not ChatGPT plugins; those are just the demo of GPT-4 being a bureaucracy engine. Its impact is not automating the things humans were already doing, but creating a new programming paradigm in which applications have intelligent rule-followers sitting inside half of their procedures, able to invoke other procedures. Nobody seriously tried to do this with real humans, not at the scale of software, because it takes the kind of nuanced rule-following you'd need lawyers with domain-specific expertise for, which is multiple orders of magnitude more expensive than LLM API access and too slow for most purposes.
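A sketch of that paradigm: an ordinary application loop with a rule-following model embedded in it, deciding which local procedure to invoke next. The CALL/DONE conventions, the procedure names, and the refund example are all made up for illustration; `complete` stands in for any chat-completion callable.

```python
import re
from typing import Callable, Dict, List

# Ordinary application procedures; the model is the rule-follower that
# decides which one to invoke next and with what argument.
def lookup_order(order_id: str) -> str:
    return f"order {order_id}: shipped 2023-03-21"

def refund(order_id: str) -> str:
    return f"refund issued for order {order_id}"

PROCEDURES: Dict[str, Callable[[str], str]] = {
    "lookup_order": lookup_order,
    "refund": refund,
}

RULES = (
    "Follow the refund policy. You may call procedures by replying with a single "
    "line CALL <name>(<argument>). When finished, reply DONE: <summary>."
)

CALL_RE = re.compile(r"CALL (\w+)\((.*)\)")

def run(task: str, complete: Callable[[List[dict]], str], max_steps: int = 8) -> str:
    """Drive the model/procedure loop. `complete` is any chat-completion
    callable (messages -> assistant text), e.g. a thin wrapper over an LLM API."""
    messages = [{"role": "system", "content": RULES},
                {"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = complete(messages)
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("DONE:"):
            return reply
        m = CALL_RE.search(reply)
        if m and m.group(1) in PROCEDURES:
            result = PROCEDURES[m.group(1)](m.group(2))
            messages.append({"role": "user", "content": f"RESULT: {result}"})
        else:
            messages.append({"role": "user", "content": "ERROR: unrecognized call"})
    return "gave up after max_steps"
```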
Maybe. Depends on how good it gets. It is possible that GPT-4 with plugins it has learned to use well (so that for each query it doesn't read the plugin's description, it just "knows" to use Wolfram Alpha and its first query is properly formatted) will be functionally an AGI.
Not an AGI without its helpers, but in terms of user utility an AGI, in that it has approximately the breadth and depth of skills of the average human being.
Plugins would exist where it can check its answers, look up all unique nouns for existence, check that its URL references all resolve, and so on.
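A sketch of the simplest such verification check, resolving the URLs cited in a draft answer. This is not any actual plugin spec, just a local checker a wrapper could run before the answer is shown; a production version would likely also fall back to GET for servers that reject HEAD.

```python
import re
import requests

URL_RE = re.compile(r"https?://[^\s)\]}>\"']+")

def check_urls(draft_answer: str) -> dict:
    """Map each URL cited in the draft to whether it resolves, so the model
    (or a wrapper) can revise hallucinated references before answering."""
    results = {}
    for url in set(URL_RE.findall(draft_answer)):
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results

# Example: flag a made-up reference alongside a real one.
draft = ("See https://en.wikipedia.org/wiki/Wolfram_Language "
         "and https://example.invalid/paper.pdf")
print(check_urls(draft))
```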