It’s true we shouldn’t mistreat sentient AGI systems any more than we should mistreat humans; but we’re in the position of having to decide what kind of AGI systems to build, with finite resources.
That’s not how R&D works. With the early versions of something, you need the freedom to experiment, and early versions of an idea need to be both simple and well instrumented.
One reason your first AI might be a ‘paperclip maximizer’ is simply that it’s less code to fight with. Certainly, judging from OpenAI’s papers, that’s basically what all their systems are. (They don’t have the ability to allocate additional resources to themselves, which seems to be the key step that makes a paperclip maximizer dangerous.)
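To make the ‘less code’ point concrete, here’s a minimal sketch (hypothetical, not drawn from any actual OpenAI system): a scalar-objective hill climber with logging. It’s about as simple as an optimizer gets, it’s trivially instrumented, and it has no machinery at all for allocating itself more resources.

```python
# Hypothetical toy example, not any real lab's system: a bare-bones
# single-objective maximizer, to illustrate why "maximize one number"
# is the least code to fight with during early R&D.
import logging
import random

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("toy_maximizer")


def objective(x: float) -> float:
    # Stand-in reward: any scalar the experimenter wants driven upward.
    return -(x - 3.0) ** 2


def hill_climb(steps: int = 100) -> float:
    x = random.uniform(-10.0, 10.0)
    for step in range(steps):
        candidate = x + random.gauss(0.0, 0.5)
        if objective(candidate) > objective(x):
            x = candidate
        # "Well instrumented": every decision is visible to the experimenter.
        log.info("step=%d x=%.3f objective=%.3f", step, x, objective(x))
    # Note: nothing here can request more compute, copy itself, or act on
    # the world beyond adjusting x -- the resource-acquisition step is absent.
    return x


if __name__ == "__main__":
    hill_climb()
```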