Just to note, your last paragraph reminds me of Stuart Russell's approach to AI alignment in Human Compatible. And I agree this sounds like a reasonable starting point.
There's a tiny possibility he may have influenced my thinking. I did spend six months editing him, among others, for a documentary.