(This should have been attached as a reply to my first quick take, rather than posted as a separate quick take.)
From someone’s DM to me:
“Regarding war, rape, and murder, it’s one thing to make them taboo, view them with disgust, and make them illegal. It’s another thing to want to see them disappear from the world entirely. They are side effects of human capacities that in other contexts we cherish, like the willingness to follow powerful leaders, to make sacrifices, to protect ourselves by fighting and punishing enemies who threaten or abuse us, and the ability to feel erotic excitement. This is why they keep happening.”
What I am seeing here is “side effects” and “rape → erotic excitement”, when rape, by definition, means going against another human’s will. I did not expect that in 2024 I would have to explain why this is not the case by linking https://valleycrisiscenter.org/sexual-assault/myths-about-sexual-assault/#:~:text=Reality%3A%20Rape%20is%20not%20about,him%20accountable%20for%20his%20actions.
I am not sure whether this person is 1) trying to promote “AI risks are more important than anything else” and emphasize its sole importance (based on other parts of this person’s DM) by arguing that other issues are not real issues or dangers (if so, this is absolutely the wrong way to do it, and also what I was worried about in the first place), or 2) truly believes the above about “unavoidable side effects” of the abuse of power such as rape, seeing it as one-sided erotic excitement built, by definition, on another person’s suffering. They also commented that it is similar to having matches in the home without setting the house on fire, which is a poor analogy: matches are useful, but the intention to rape or otherwise harm other people is not. I am not sure of this person’s conscious or unconscious motivation.
I especially hope the moral standard and level of awareness shown in this comment are not prevalent in our community. We are unlikely to end up with properly aligned AI if we neither care about nor understand war, rape, and murder, or if we view them as distractions not worth any collective resources. It is not enough for us to know these things are wrong; our AIs need to know it too.
Finally, I would encourage posting these thoughts as public comments rather than DMs. The person is aware of this. Be open and transparent.
[edited]
Those are my words; I am the one accused of “low moral standards” for refusing to embrace goals set by the poster.
I was going to say something in defense of myself, but I actually have other, humbler but real responsibilities, and in a moment of distraction just now, I almost made a mistake that would have diabolical consequences for the people depending on me.
Life is hard; it can be the effort of a lifetime just to save one other person, and you shouldn’t make such accusations so casually.
It is not really about you; it is about your views. You reached out to me with a DM about my initial post, which questioned whose beautiful world this is after seeing and reflecting on recent events involving rape and various outrageous wars, and you seemed to be trying to convince me that those are not as important as AI safety.
It is not that you do not embrace “my goals”; rather, you seem to believe that “working on addressing rape or war would be a distraction from AI safety risks”.
You also seem to have a worrisome misinterpretation of what rape is. You wrote: “Since rape is a side effect of erotic excitement, and so removing rape would mean removing erotic excitement.” This claim is wrong. First of all, rape is about power and has only a limited relation to erotic excitement; this framing around “erotic excitement” is very perpetrator-centric. At best you could compare rape with murder, though I would argue it is worse than murder, because the perpetrator enjoys humiliating another human. Removing rape does not mean removing erotic excitement.
And in fact, if some of these “short-term” issues are not worked on, they will likely remain permanent distractions and barriers for the affected populations and for their families and friends (at some point anyone could become part of that population), barriers to everything, including their own lives and their life goals (perhaps even AI safety).
[Edited: If people collectively believe this, there will naturally be more acceptance of the harm one human does to another, because they will think “it is a part of human nature”.]
I may have reached the point where I can crystallize my position into a few clear propositions:
First: I doubt the capacity of human beings, using purely human methods like culture and the law, to 100% abolish any of these things. Reduce their frequency and their impact, yes.
Second: I believe the era in which the world is governed by “human beings using purely human methods” is rapidly coming to a close anyway, because of the rise of AI. Rather than the human condition being perfected in a triumph of good over evil, we are on track to be transformed and/or relegated to a minor role in the drama of being, in favor of new entities with quite different dispositions and concerns.
Third: I am not a sexologist or a criminologist, but I have serious doubts about an attempt to abolish rape that wants to focus only on power and not on sexuality. I gather that this can be a good mindset for a recovering survivor, and any serious conversation on the topic of rape will necessarily touch on both power and sexuality anyway. But we’re talking abolition here, a condition that has never existed historically. As long as humanity contains beastly individuals who are simply indifferent to sympathy or seduction, and as long as some men and women derive enjoyment from playing out beastly scenarios, the ingredients will be there for rape to occur. That’s how it appears to me.
(I mentioned some of these in our chat, but allow me to repeat myself.)
On the first: I don’t think efforts to reduce rape or discrimination require 100% abolition, but working toward it has huge returns at this point in history. Humans should have self-control and understand consent, and enforcing these would also solve the problem completely if 100% is what must be achieved. Education has a snowball effect as well. Just because 100% is hard to achieve does not mean there should be no effort, nor that it is impossible. In fact, this is rarely worked on in isolation; for example, one strategy might be to improve education in general, or to reduce economic disparity, and in the process teach people how to respect other people.
On the second: We likely need a good value system to align AI on; otherwise, the only alignment an AI would know is probably not to overpower the most powerful human. That does not seem like a successful outcome for “aligned AI”. I think there have been a few recent posts on this as well.
On the third: I have seen many people with this confusion: rape play/BDSM is consensual, while rape by definition is non-consensual. Rape is purely about going against the person’s will. Murder might be a more comparable crime, but rape has historically been inflicted by one group on another because of biological features and power differences that people cannot easily change (though there is also a great deal of male-on-male rape). In my view it is worse than murder because it is extreme suffering, suffering the victims carry through their whole lives, and many end in suicide anyway.
Otherwise, I am glad to see you thinking about these issues and open to discussion.
I honestly don’t really get your point. It seems like the person who messaged you basically made the important point in the first sentence, and then the rest of this seems to be you doing a bunch of psychologizing.
I of course lack lots of context here, but I feel like you were trying to make a point that stands on its own.
I don’t really see anything wrong with the original DM. My guess is I agree with it: we should be very hesitant to apply arbitrarily strong incentives against even quite bad things, since they might be correlated with, or a necessary form of collateral damage from, things that are really important for human flourishing.
This of course doesn’t imply that I (or the person who DMd you) “do not care about war, rape, nor murder”, that seems like a crazy inference that’s directly contradicted by the first sentence you quote.
Thanks for commenting.
I don’t see the need to remove human properties we cherish in order to remove or reduce murder or rape. Maybe war. Rape, especially, is never defensive. That is an extremely linear way of connecting these things.
In particular, it is the claim that rape is in any way related to erotic excitement that I find outrageous. That is simply wrong, and it is an extremely narrow, perpetrator-centered way of thinking. Check out the link in my comment if you want to learn more about myths surrounding rape.
On the “do not care” point: my personal belief is that if one believes suffering risks should not take up resources, not just one’s own but other people’s as well, then from a consequentialist or simply practical point of view I do not see much difference between that and not caring; or else the degree of caring is too small to justify any allocation of resources or attention. And in my initial message I had already linked multiple recent cases. (The person said that working on these issues is a “dangerous distraction”, not for himself but for “ourselves”, which seems to mean society as a whole.) Again, you do not need to emphasize the importance of AI risks by downplaying other social issues. That is not rational.
Finally, the thread post is not about the user but about their views; it stands somewhat on its own as a reflection on the danger of our collectively lacking correct awareness of, or attention to, these issues, and on how that relates to successful AI alignment.
Relatedly, from my first reply to the person’s DM:
On resource scarcity: I also find it difficult for humans to practice altruism without resources, and it may be a privilege even to think about these things (financially, educationally, health-wise, and information-wise). Would starting with something important, though perhaps not the “most” important, help? Sometimes I find the process of identifying the “most” important a bit less rational than people would expect or theorize.
Bias and self-preservation are human nature, but need correction
To expand from there: this is one thing I am worried about. Our views and beliefs are by nature limited by our experiences, and if humans do not recognize or acknowledge our self-preserving nature, we will claim altruism without actually practicing it. It will also prevent us from establishing a really good system that incentivizes people to do so.
To give an example, there was a Manifold question (https://manifold.markets/Bayesian/would-you-rather-manifold-chooses?r=enlj) asking whether Manifold would rather “Push someone in front of a trolley to save 5 others (NO) -OR- Be the person someone else pushes in front of the trolley to save 5 others (YES)”. When I saw the stats before close, it was at 92% (I chose YES and lost; I would say my YES came with 85% confidence). While we claim to value people’s lives equally, I see self-preservation persist. And I can see how that radiates outward: preferring the preservation of one’s family, one’s friends, the people one is more familiar with, and so on.
Simple majority vs minority situation
This is similar to majority voting. If the population is 80% group A and 20% group B, with equal votes and power for everyone, then on issues where A and B are in conflict, especially where there is some history of A oppressing B, B’s concerns will rarely be addressed. (I hope humans nowadays are better than this, and more educated, but who knows.)
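The arithmetic above can be sketched in a few lines (my own illustration, not from the thread, assuming sincere bloc voting, i.e. every member votes their group’s position on every conflict issue):

```python
# Toy model of simple majority rule with bloc voting along group lines.
# Assumption (mine): every member of A votes for A's position and every
# member of B votes for B's, on every issue where the groups conflict.

def majority_vote(share_a: float, n_voters: int = 1000) -> str:
    """Return which group's position wins a simple majority vote."""
    votes_a = round(share_a * n_voters)
    votes_b = n_voters - votes_a
    return "A wins" if votes_a > votes_b else "B wins"

# With an 80/20 split, every conflict issue resolves in A's favor,
# regardless of how strongly B is affected.
print(majority_vote(0.8))
```

Under these assumptions the minority’s position never prevails, which is the point of the 80/20 example: a bare majority count carries no information about the intensity of harm to the minority.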
Power
When power (social and economic) is added to self-preservation, without a proper social contract, things become even more messed up. The decision-makers on these problems will always be the people with some sort of power.
This is true in reality, where the people who get to choose and plan what matters most in our society are either the powerful, or the population that outnumbers the others; worse if both. As a result, we see less passionate caring about minority issues, even on a platform like this that practices rationality. With power, everything can be made subordinate to it, including other people, policies, and law.
How do we create a world with maximum freedom for people, under the constraint that people do not harm other people, regardless of whether they have power and regardless of whether they would get caught? What level of “constraint” are historically powerful groups willing to accept? I am not yet sure of the right balance or of what is most effective. I have seen cases where some men were reluctant to receive educational materials on consent, claiming they already understood it when they did not. That is a very, very small cost, and yet some people are unwilling to pay even that. It frustrates me.
Regardless, I do believe in the effectiveness of awareness, continuous education, and law, and in the need to foster a culture that cultivates humans who truly respect other lives.
Real-life examples
I was not very aware of many related issues of women’s suffering until recently, from the Korean pop star group to the recent case of the female doctor (https://www.npr.org/sections/goats-and-soda/2024/08/26/g-s1-18366/rape-murder-doctor-india-sexual-violence-protests). I had believed that by 2024 these were things humans should already have been able to solve, but maybe I was in my own bubble. And organizationally, at the country level, I believe you will have seen plenty of news about the various wars going on now.
Thanks for sharing the two pages! I am not sure whether the above is clear, but I have tried to document all of my thoughts.