Thanks for commenting.
I don’t see why we would need to remove human properties we cherish in order to eliminate or reduce murder or rape. Maybe war. Rape, especially, is never defensive. Connecting these that way is an extremely linear line of reasoning.
In particular, it is the claim that rape is in any way related to erotic excitement that I found outrageous. That is simply wrong, and it reflects an extremely narrow, perpetrator-centered way of thinking. See the link in my comment if you want to learn more about myths surrounding rape.
On the “do not care” point: my personal belief is that if someone holds that suffering risks should not take up resources to be worked on, not only their own resources but other people’s as well, then from a consequentialist or simply practical standpoint I do not see much difference between that and not caring, or at best the degree of caring is too small to support allocating any resources or attention. And in my initial message I had already linked multiple recent cases where such suffering has happened. (The person said that working on these is a “dangerous distraction”, and not just for himself; “ourselves” seems to mean society as a whole.) Again, you do not need to emphasize the importance of AI risks by downplaying other social issues. That is not rational.
Finally, the thread post is not about the user but about their views; it stands somewhat on its own as a reflection on the danger of collectively lacking proper awareness of, or attention to, these issues, and on how that relates to successful AI alignment as well.