One does something unexpected in the real world; the other does something unexpected within a simulator, which is part of the AI and which it views in ‘god’ mode rather than via within-simulator sensors.
I would have thought the same before hearing about the AI-box experiment.
The relevant sort of agent is one that builds and improves a model of the world (data is acquired through sensors) and works on that model, and which, when self-improving, would improve the model in our sense of the word ‘improve’ rather than breaking it (improving it in some other sense).
In any case, none of our modern tools, nor any tool we know in principle how to write, would do something to you, no matter how many FLOPS you give it. Many, though, given superhuman computing power, give results at a superhuman level. (Many are superhuman even with subhuman computing power; but some tasks are heavily parallelizable and/or benefit from massive databases of cached data, and on those tasks humans, when trained a lot, perform comparably to what you’d expect from roughly as much computing power as there is in a human head.)
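The model-building agent described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not anyone's actual design: `WorldModel`, `predictive_accuracy`, and `agent_step` are hypothetical names, and "improve in our sense" is operationalized here as accuracy on held-out observations.

```python
class WorldModel:
    """Holds the agent's beliefs about the world (a toy key-value stand-in)."""

    def __init__(self):
        self.beliefs = {}

    def update(self, observation):
        # Incorporate new sensor data into the beliefs.
        self.beliefs.update(observation)

    def predictive_accuracy(self, held_out):
        # Fraction of held-out observations the model already agrees with.
        if not held_out:
            return 1.0
        matches = sum(1 for k, v in held_out.items() if self.beliefs.get(k) == v)
        return matches / len(held_out)


def agent_step(model, observation, candidate_model, held_out):
    """One cycle: sense, update the model, then self-improve only if the
    candidate model is better *in our sense* (more accurate on held-out
    data), rather than by some other, broken criterion."""
    model.update(observation)
    if candidate_model.predictive_accuracy(held_out) > model.predictive_accuracy(held_out):
        return candidate_model  # adopt the improvement
    return model                # keep the current model
```

The point of the acceptance test in `agent_step` is exactly the distinction in the comment above: a self-modification counts as an improvement only when judged against the external criterion (predictive accuracy), not against whatever criterion the modification itself happens to optimize.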
I would have thought the same before hearing about the AI-box experiment.
What the hell does the AI-box experiment have to do with it? The tool is not an agent in a box.
They are both systems designed not to interact with the outside world except by communicating with the user.
They both run on computers, too. So what?