Mmm, I would say the general shape of your view won’t clash with reality, but the magnitude of the impact will.
It’s plausible to me that a smart buyer will go and find the best deal for you when you tell it to buy laptop model X. It’s not plausible to me that you’ll be able to instruct it “buy an updated laptop for me whenever a new model comes out that is good value and sufficiently better than what I already have,” and then let it do its thing completely unsupervised (with direct access to your bank account). That’s what I mean by multiple complicated objectives.
What counts as a “domain where correctness matters”? What counts as a “very constrained set of actions”? Would, e.g., a language-model-based assistant that can browse the internet and buy things for you on Amazon (with your permission, of course) be in line with what you expect, or violate your expectations?
Something that goes beyond current widespread use of AI such as spam-filtering. Spam-filtering (or selecting ads on Facebook, or flagging hate speech, etc.) is a domain where the AI is doing a huge number of identical tasks, and a certain % of wrong decisions is acceptable. One wrong decision won’t tank the business. Each copy of the task is done in an independent session (no memory).
An example application where that doesn’t hold is putting the AI in charge of ordering all the material inputs for your factory. Here, a single stupid mistake (not buying something because the price will go down in the future, replacing one product with another, misinterpreting seasonal cycles) will lead to a catastrophic stop of the entire operation.
(Also, what about Copilot? Isn’t it already an example of an application that genuinely works, and isn’t just in the twilight zone?)
Copilot is not autonomous. There’s a human tightly integrated into everything it’s doing. The jury is still out on whether it even works: do we have anything more than some programmers’ self-reports to substantiate that it increases productivity? Even if it does work, it’s just a productivity tool for humans, not something that replaces humans at their tasks directly.
A distinction which makes no difference. Copilot-like models are already being used in autonomous code-writing ways, such as AlphaCode which executes generated code to check against test cases, or evolving code, or LaMDA calling out to a calculator to run expressions, or ChatGPT writing and then ‘executing’ its own code (or writing code like SVG which can be interpreted by the browser as an image), or Adept running large Transformers which generate & execute code in response to user commands, or the dozens of people hooking up the OA API to a shell, or… Tool AIs want to be agent AIs.
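To make the pattern concrete, here is a minimal sketch (in Python) of the loop that turns a code-completion tool into an agent: the model’s output is executed and the result is fed back into the next prompt. generate_code is a hypothetical stand-in for whatever completion endpoint is available (the OA API, a local model); nothing here is any real product’s interface.

    import subprocess

    def generate_code(prompt: str) -> str:
        """Hypothetical stand-in: call some code-generation model and return its output."""
        raise NotImplementedError("wire this to an actual completion endpoint")

    def agent_loop(task: str, max_steps: int = 5) -> str:
        # The transcript accumulates the task, each generated command, and its output,
        # so the model sees the results of its own actions on the next step.
        transcript = f"Task: {task}\n"
        for _ in range(max_steps):
            # The model proposes a shell command given everything so far.
            command = generate_code(transcript + "Next shell command:")
            # The host executes it with no human in between; this single line is
            # what turns a text-completion tool into an agent acting on the world.
            result = subprocess.run(command, shell=True, capture_output=True,
                                    text=True, timeout=30)
            transcript += f"$ {command}\n{result.stdout}{result.stderr}\n"
            if "DONE" in result.stdout:
                break
        return transcript

The point of the sketch is only that nothing in the loop requires a human between generation and execution, which is exactly the autonomy the parent comment says Copilot lacks.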