“Public declarations would only be signaling, having little to do with maximizing good outcomes.”
On the contrary, trying to influence other people in the AI community to share Eliezer’s (apparent) concern for the suffering of animals is very important, for the reason given by David.
“I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one.”
a) Less Wrong doesn’t contain the best content on this topic.
b) Most of the posts disputing whether animal suffering matters are written by unempathetic non-realists, so convincing them would require a discussion of meta-ethics and of how to deal with meta-ethical uncertainty.
c) The reason has been given by Pablo Stafforini—when I directly experience the badness of suffering, I don’t only perceive that suffering is bad for me (or bad for someone with blonde hair, etc.), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).
d) Even if there is some uncertainty about whether animal suffering is important, that would still require that it be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or signal through my actions to potentially influential people that doing so is OK.
c) The reason has been given by Pablo Stafforini—when I directly experience the badness of suffering, I don’t only perceive that suffering is bad for me (or bad for someone with blonde hair, etc.), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).
This is an interesting argument, but it seems a bit truncated. Could you go into more detail?
Where is the best content on this topic, in your opinion?
Eh? Unpack this, please.