This is becoming an ethical concern. At this point I would say that if they don't stop, they are risking severe mindcrime. If the overview you quoted is correct, those entities are sentient enough to deserve moral consideration. But no one is going to give that consideration, and anyway this produces the risk of AGI, which is bad for everyone. I wish companies like DeepMind would realize how terrifyingly dangerous what they're doing is, and focus instead on perfecting narrow AIs for various specific tasks.
If you want to discuss this more with me, I'd be interested! I had similar thoughts when reading the paper, especially the parts about all the zero-sum games they made these agents play.
I’m not sure what else to say on the matter but feel free to shoot me a DM.
Note that I'm not an AI expert or anything; the main reason I post here so rarely is that I'm more of an intuitive thinker (armchair philosopher, poet, etc.) than anything, and I feel intimidated next to all you smart analytical people. My life's goal, though, is to ensure that the first superintelligence is a human collective mind (via factored cognition, BCIs, etc.), rather than an AGI; hence my statement about narrow AIs. I see AIs as plugins for the human brain, not as something that ought to exist as separate entities.