experiment, meaning building and interacting with early-stage proto-AGI systems of various sorts.
I’m not very familiar with Goertzel’s ideas. Does he recognize the importance of not letting the proto-AGI systems self-improve while their values are uncertain?
From what I’ve gathered, Ben thinks these experiments will reveal that friendliness is impossible, i.e. that ‘be nice to humans’ is not a stable value. I’m not sure why he thinks this.