I don’t know what our terminal goals are (more precisely than “positive emotions”). I think it doesn’t matter insofar as the answer to “what should we do” is “work on AI alignment” either way. Modulo that, yeah there are some open questions.
On the thesis that suffering requires higher-order cognition in particular, I have to say that sounds incredibly implausible (for what I think are fairly obvious reasons involving evolution).