I would be interested in dialoguing about:
- why the “imagine AI as a psychopath” analogy for possible doom is or isn’t appropriate (I brought this up a couple months ago and someone on LW told me Eliezer argued it was bad, but I couldn’t find the source and am curious about reasons)
- how to maintain meaning in life if it seems that everything you value doing is replaceable by AI (and mere pleasure doesn’t feel sustainable)
I wouldn’t mind talking about the meaning thing you’re interested in.