I honestly don’t really get your point. It seems like the person who messaged you basically made the important point in the first sentence, and then the rest of this seems to be you doing a bunch of psychologizing.
I of course lack lots of context here, but I feel like you were trying to make a point that stands on its own.
I don’t really see anything wrong with the original DM. My guess is I agree with it, and I think we should be very hesitant to apply arbitrarily strong incentives against even quite bad things, since they might be correlated with, or a necessary form of collateral damage from, things that are really important for human flourishing.
This of course doesn’t imply that I (or the person who DMd you) “do not care about war, rape, nor murder”; that seems like a crazy inference, one directly contradicted by the first sentence you quote.
Thanks for commenting. I don’t see why reducing or eliminating murder or rape would require removing human properties we cherish. Maybe war. Rape, especially, is never defensive. Connecting these things that way is extremely linear thinking.
In particular, it is the claim that rape is in any way related to erotic excitement that I found outrageous. That is simply wrong, and an extremely narrow, perpetrator-centered way of thinking. See the link in my comment if you want to learn more about myths around rape.
On the “do not care” point: my personal belief is that if someone believes suffering risks should not take up anyone’s resources, not just their own but other people’s as well, then from a consequentialist point of view, or simply practically, I do not see much difference between that and not caring; or, at best, the degree of caring is too small to support any allocation of resources or attention. And in my initial message I had already linked multiple recent cases. (The person said that working on these issues is a “dangerous distraction”, and not just for himself: “ourselves” seems to mean society as a whole.) Again, you do not need to emphasize the importance of AI risks by downplaying other social issues. That is not rational.
Finally, the thread post is not about the user but about their views; it stands somewhat on its own as a reflection on the danger of collectively lacking proper awareness of, or attention to, these issues, and on how that relates to successful AI alignment as well.