I saw someone write somewhere that they will try their best in their life to address X-risks, to make sure future humans/generations will see this beautiful world. I respect that. However, much recent news and many recent events make me also think about current humans/people, right now. Who is seeing this beautiful world right now? Whose beautiful world is this?
The people in war didn’t see it. It is not their beautiful world. The female doctor who was brutally gang-raped while on duty didn’t see it. It is not her beautiful world. Are there enough efforts to help these people see a beautiful and fair world? Is this really a beautiful world? Does this beautiful world belong to everyone?
This beautiful world should belong to everyone.
Everyone including the next generation, and the next.
Getting AGI right is the odds-on way to help everyone, particularly those who are disadvantaged now.
You can disagree with the logic, but only if you understand the logic.
The utilitarian concern for everyone strongly implies that we should worry about the future. If you believe that AGI presents even a 1% x-risk, of humanity’s likely quadrillions of descendants not existing, or being born into an unbreakable dystopia, and that we’re near a tipping point, then the cold hard logic says that’s what we should all focus on. And most of us believe the risk is much higher than 1% - LWers might average around 50% chance we screw up AGI and all die.
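To make that cold arithmetic concrete, here is a minimal sketch (illustrative numbers only; the 1% and the quadrillion are the assumptions named above, not established estimates):

```python
# Illustrative numbers only: the 1% risk and the "quadrillions" are the
# assumptions named in the comment above, not established estimates.
p_doom = 0.01                # assumed 1% chance AGI goes catastrophically wrong
future_people = 1e15         # one quadrillion potential future descendants
expected_lives_at_stake = p_doom * future_people
print(f"{expected_lives_at_stake:.0e}")  # -> 1e+13 expected future lives
```

Even at the low end, the expected stakes dwarf any present-day cause by raw count; that is the logic, whatever one thinks of its premises.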
Add to that the fact that the vast majority of humanity focuses much more on the problems you mention, and it makes more sense for those who see AGI as an immense force for good or ill to focus on making sure it’s good.
The possible rewards of empowering every human beyond the privilege of kings, and allowing quadrillions of joyous humans to live far into the future, are as immense as the risk of us being snuffed out in the next few decades.
That’s why we seem callous to the suffering of the underprivileged now. I don’t think we are.
PS - Shame on whoever downvoted this question. Suppressing dissent instead of explaining your logic to newcomers is not a good look for any community, nor good epistemics. This is a reasonable question to ask, and LW is supposed to be about asking and answering reasonable questions.
Thanks for the thoughtful comments, first of all.
What I am sensing or seeing right now is that, in order to promote AGI risks and X-risks, people are downplaying or down-ranking the importance of the people who already cannot see this beautiful world. It does not feel logical to me that, because there are always future issues that affect everyone’s lives, the issues that already put some people on the miserable side should not be addressed.
The problem here is that we do not have the same starting line for “everyone” now, and therefore progress toward saving everyone with a future focus might not mean the same thing. I maybe should draw a graph to demonstrate my point. As opposed to only considering problems that concern everyone, I think we should also focus a bit more on an inclusive finish line that is connected to the current reality of unequal starting lines. If this world continues to be this way, I also worry whether future generations would like it, or would want to be brought into it.
I understand the utilitarian intentions, but I also believe we could incorporate egalitarian views. In fact, a mindset or rules promoting equality, or something along those lines, actually helps everyone: most humans will be one of those disadvantaged people at some point in their lives, in some way. Maybe a person’s home suddenly becomes a war zone. Maybe they suddenly become disabled. Maybe they or a loved one experiences sexual assault. Establishing a good system to reduce and prevent these helps future humans as well. I would like to formalize this a bit more later.
The two views, current and future, should really join forces, as opposed to excluding each other. There are many tasks I see as shared, such as social-good mindsets and governance.
Some background about me: I believe in looking into both, and believe there is value in looking into both. It would be dangerous to focus on only one by promoting the other, gradually overlooking or going backwards on things we have already started.
I see your concern. I don’t think that people who are currently disadvantaged will remain behind in the expansion toward infinity. If the rich who create AGI control the future and choose to be dicks about it, all of the rest of us are screwed, not just the currently-poor.
If those who create AGI choose to use it for the common good, it will very quickly elevate the poor to equality with the current top .01%, in terms of educational opportunities. And they will be effectively wealthier than the top .01%.
That’s why I see working on AGI alignment (and the societal alignment sub-problem: trying to make sure the people who control aligned AGI/ASI aren’t total dicks, or so foolish about it that we die anyway) as by far the most likely thing we can do to make the world better for the disadvantaged.
Because we are not remotely on top of this shit, there’s a very good chance we all get oblivion instead of paradise-on-earth. And all of us have finite time and energy to spend.
Do you have a specific proposal for how to address those things?
Meanwhile, https://forum.effectivealtruism.org/ https://www.givewell.org/ etc.
Do you think people are firmly aware of this in the first place? I would love to hear that the answer is yes.
On solutions: I am not sure yet of a universal solution, as it is very dependent on each case. For some problems, solutions would be around raising awareness and international cooperation on educating humans who hurt other humans, pushing for law reforms, and alternative sentencing. Solutions aside, I am not sure I am seeing enough people even care about these. My power alone is not enough; that’s why we need to join forces.
I am worried about how people would downvote this on this platform. I don’t think worrying about the long term is bad, or that it should not be looked into, but at the same time it should not be the only thing we care about either. What worries me is that I am seeing more and more of the sentiment “we should only work on long-term X-risks, not on the people suffering now, and any comment that says otherwise is wrong”.
There is danger in overlooking current risks. Besides the obvious reason that we would be ignoring current people, from the future’s perspective we would be missing the connection between the future and the present, and missing the opportunity to practice applying solutions to reality. And thanks for the link to effective altruism; I am aware of the initiative after attending an introductory program. It felt to me like it was merging a couple of different directions, and somehow recent efforts seemed mostly about long-term X-risks. For its original goal, I am also not entirely sure it considers fairness enough, despite an effort to consider scarcity. An EA concept I see some people not understanding well enough is the marginal part, the marginal dollar and marginal labor, which should allow people to invest in a portfolio of things. I would welcome any recommended readings on this.
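To illustrate the marginal-dollar point, a toy sketch (the cause names and “scale” numbers here are made up, purely to show the mechanism):

```python
# Toy model (made-up cause names and "scale" numbers): each cause has
# diminishing returns, so the value of the next marginal dollar falls as
# the cause gets more funded. Greedily placing each dollar where its
# marginal value is highest yields a portfolio, not an all-in bet.
def marginal_value(scale, funded):
    return scale / (1.0 + funded)  # diminishing returns per extra dollar

causes = {"x-risk": 5.0, "global health": 3.0, "rights/law reform": 2.0}
funding = {name: 0.0 for name in causes}

for _ in range(100):  # allocate 100 marginal dollars, one at a time
    best = max(causes, key=lambda c: marginal_value(causes[c], funding[c]))
    funding[best] += 1.0

print(funding)  # a mixed portfolio across all three causes
```

Under these assumptions, even a cause judged less important in total still receives some of the marginal dollars.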
I think LW users are well aware of the staggering suffering occurring in the world right now. Some other humans manage to not think about it. Utilitarians, or even those who’ve thought much about it, do.
That’s good to hear. Are there any posts you have encountered that are good and mention these issues, or solutions to them?
The EA Forum is the branch of the community that primarily deals with solutions for present-day problems. I focus heavily on x-risk, as per my other comment.
Do you think people are firmly aware of this in the first place?
Well, which people? I wish more people cared, and I wish more people knew about effective altruism.
It seems like in general most people don’t give a fuck. When I was younger and active in non-profit kinds of activities, it seemed like wherever you go, you meet the same people. Go to the local Amnesty International meeting, go to the local Greenpeace meeting, go to the local Esperanto speakers meeting, and you will probably find a few people repeatedly in all those places. And then go anywhere else, and you will find that most people have never heard about any of that.
So we have maybe 1% of population that is trying to solve all the world problems, and 99% who are mostly not even aware that any of this is happening (either the problems, or the attempted solutions).
That is okay to some degree. We need to also have people who fully focus on other things, someone who spends their energy trying to be a better heart surgeon, to build a self-driving car, etc. Some people’s attention is fully occupied raising their own kids, or improving their neighborhood. All of that is legit.
But I suspect that most people are doing none of that. They treat the entire world as “someone else’s problem”. They discuss fashion, or talk about how bored they are, or about how “someone else” should fix all their first-world problems. If you know a way to change these people, please go ahead and try.
Among the “1% who care”, I think that anything political is going to be 100x more popular than anything not connected to politics. That doesn’t mean that most of them are doing something meaningful. Maybe 1 in 100 is actually reading some statistics and studying policy, and 99 in 100 are just chanting whatever slogan is currently most popular on twitter.
Sorry, there is no optimistic ending to this rant. I believe that people who (1) care and (2) are smart and competent are in very short supply. But I still think that, among those, there are more who care about the law than who care about AI alignment. Though both groups would benefit a lot from having more people who are sufficiently caring and capable.
some solutions would be around raising awareness and international cooperation on educating humans who hurt other humans, pushing for law reforms, and alternative sentencing. Solutions aside, I am not sure I am seeing enough people even care about these. My power alone is not enough; that’s why we need to join forces.
I am skeptical about the awareness part. I think most people have some awareness that something bad is going on in Ukraine or Palestine or one of those places I am not thinking about; or that the educational system is fucked up; or that many people are still generally bad towards others.
But being aware is just a starting point. You need to educate yourself about the issue; there is a lot of misinformation out there, so you need to figure out the truth. That is already hard work and requires some intelligence. Then you need to overcome the temptation to join the loudest and most popular people, as opposed to those who actually do the hard work (and maybe therefore don’t have the extra time and energy to promote themselves). Again, most people fail at this filter. Then you need to dedicate some time to the cause, because there are many causes you could choose from, plus maybe you should focus on your own life and career. And only once you focus on the cause… then you meet the resistance of people who profit from the status quo, who are sometimes many, or have more power.
I think you should probably try to find some NGOs in your proximity that already work on the issues you care about. Then you can join an existing group instead of trying to create a new one, and you can learn from their expertise. Maybe try more than one, so that you can compare, and there is a smaller risk of getting involved in some political cult or something. I think that if you knock on some organization’s door and say “hey, I approve of your cause, I don’t have much experience, but tell me if there is anything I could help you with”, that could be a start, especially if you are willing to do some non-fun work. Some organizations have hundreds of online supporters, but when they need someone to bring the printed flyers from one building to another, fold the flyers, and put them in printed envelopes, suddenly there are no volunteers, because this part is no fun. So if you volunteer to do that, you are (1) already helping a lot, and (2) this is how you meet the other people who are willing to do the non-fun parts, and those are the ones to learn from.
I meant people on this platform initially, but it is also good to reflect on people generally, which is also where some of my concerns are.
I agree on attending NGOs, if by “you” you meant me. On awareness: most awareness does not involve detailed stories or images, and those recent events did leave a strong impression/emotion (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2827459/#:~:text=At%20the%20very%20least%20emotions,thought%20and%20away%20from%20others.).
It is a bit sad to see that for a lot of humans, even ones who already care about the world generally (which is itself a privilege, because many need to focus on staying alive themselves), if something is not relatable, they don’t deeply care. When that is correlated with power, it is the worst.
The problems of the present are as old as history, as are attempts to remedy them. Attempts at a comprehensive solution range from conservative attitudes according to which the best that humans can do, is beat down the perennial evils whenever they inevitably re-arise; to various utopianisms that want to establish that beautiful and fair world forever.
Obviously Less Wrong has a particular focus on AI as something new and enormously consequential, with the potential to transform life itself, and what kind of knowledge and what kind of interventions are needed to get a good outcome. But there was at least one organization that drew upon the rationalist community and its sensibility and intellectual resources, in pursuit of utopian aims in the present. It was called Leverage Research, and from what people say, it didn’t get very far before internal problems compelled it to greatly scale back its aims. And that is quite normal for utopian movements. Mostly they will remain small and irrelevant; occasionally they will become larger and corrupt; very rarely, they will bequeath to the world some new idea which has value, becomes general knowledge, and does some good.
In years past, there were occasional surveys of Less Wrong, on topics that included political opinions. Two strains of thought that were represented, that come to mind, are progressivism and libertarianism. I mention these because these are ideologies which have adherents here, and which, in their different ways, have a kind of universal ideal by means of which they propose to solve problems in general. The progressive ideal is organized activism, the libertarian ideal is individual freedom.
Effective altruism was mentioned in other comments, as a rationalist-adjacent community which is more focused on problems of the present. There, it’s worth reflecting on the case of Sam Bankman-Fried and FTX. That was an audacious attempt to combine a particular moral ethos (effective altruism) with a particular political orientation (his family were influential Democrats) and a business plan worthy of the most ambitious tech tycoons (to move cryptocurrency out of its Wild West phase, and become one of a few regulator-approved crypto exchanges). But crypto turned out to be in a bubble, when it burst it removed the underpinnings of the whole FTX enterprise, and Bankman-Fried has gone from being celebrated as an EA philanthropist, to disowned as a crook who did immense reputational damage to the movement.
Regularly people come along who, in one way or other, want to found a broader ethos and agenda on top of Less Wrong rationalism. Just this week, @zhukeepa has been arguing for common ground between rationalism and major religious traditions (incidentally, this is far from the first time that someone, inside rationalism or outside, proposed a secular interpretation of the ethical and prophetic elements of religion). But I think most people here, when seeking a better world in ways unrelated to AI, approach that via some philosophy distinct from LW rationalism, such as effective altruism, progressivism, or libertarianism.
If you look into a bit more of the history of social justice/equality problems, you will see we have actually made a great deal of progress (https://gcdd.org/images/Reports/us-social-movements-web-timeline.pdf), but not enough, as the bar was so low. These movements have also changed our laws. Before 1879, women could not be lawyers (https://en.wikipedia.org/wiki/Timeline_of_women_lawyers_in_the_United_States). On war, I don’t have much knowledge myself, so I will refrain from commenting for now. It is also my belief that we should not stop at attempts; an attempt is the first step (necessary but not sufficient), and attempts have pushed through to real changes, as history has shown, but it takes piles and piles of work before a significant change. Just because something is very hard to do does not mean we should stop, nor that there will not be a way (just as with ensuring there is humanity in the future). For example, we should not give up on helping people during war, nor on trying to reduce wars in the first place, and we should not give up on preventing women from being raped. In my opinion, this is in a way ensuring there is a future, as humans may very well be destroyed by other humans, or by our own mistakes. (That’s also why, in the AI safety case, governance is so important: it is where we consider the human piece.)
As you mentioned political parties: it is interesting to see surveys happening here. A side track: I believe general equality issues, such as “women can go to school”, do not depend on political party. And something like “police should not kill a black person randomly” should be supported not just by black people but by other races as well (I am not black).
Thanks for the background otherwise.
Equity vs equality considerations (https://belonging.berkeley.edu/equity-vs-equality-whats-difference):
What caused the differences in outcome?
Historical factors: we should definitely apply equity. Conditioning on history is important, and corrective efforts are needed.
Is the desired outcome a human “necessity”?
The definition of necessity may be tricky, and may even differ by culture. Generally in the US, for something like healthcare or access to education, we should move toward/apply equity.
A recent thought on AI racing: it may not necessarily lead to more intelligent models, especially at a time when the low-hanging fruit has been taken and more advanced breakthroughs need to come from longer-term exploration and research. But this does not necessarily mean that AI racing (particularly on LLMs in this context, but I think generally too) is not something to worry about. It may waste a lot of compute/resources to achieve only marginally better models. Additionally, the worst side effect of AI racing, to me, is the potential negligence of safety mitigations, and the lack of a safety-focused mindset/culture.
When thinking about deontology and consequentialism in application, I find it useful to rate the morality of actions based on intention, execution, and outcome. (Some cells are “na” as those combinations are not really logical in real-world scenarios.)
In reality, it seems to me that having executed “some” intention matters most (though I am not sure how much) when doing something bad, and that having executed to the best of one’s ability matters most when doing something good.
It also seems useful to me to learn about applications of philosophy from law. (I am not an expert in either philosophy or law, though, so these may contain errors.)
| Intention to kill the person | Executed “some” intention | Killed the person | “Bad” level | Law |
|---|---|---|---|---|
| Yes | Yes | Yes | 10 | murder |
| Yes | Yes | No | 8-10 | as an example, attempted first-degree murder is punished by life in state prison (US, CA) |
| Yes | No | Yes | na | |
| Yes | No | No | 0-5 | no law on this (I can imagine why: “it’s hard to prove”), but personally, assuming multiple “episodes” or just more time, this leads to murder or attempted murder later anyway; it is very rare that a person holds this thought without ever executing it in reality |
| No | Yes | Yes | na | |
| No | Yes | No | na | |
| No | No | Yes | 0-5 | typically not a crime, unless something like negligence |
| No | No | No | 0 | |

| Intention to save a person (limited decision time) | Executed intention to the best of ability | Saved the person | “Good” level |
|---|---|---|---|
| Yes | Yes | Yes | 10 |
| Yes | Yes | No | 10 |
| Yes | No | Yes | na |
| Yes | No | No | 0-5 |
| No | Yes | Yes | na |
| No | Yes | No | na |
| No | No | Yes | 0-5 |
| No | No | No | 0 |
(Generalized column headings: intention to do good; executed intention to the best of personal ability[1].)
[1] Possible to collaborate when there is enough time.
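As a minimal sketch, the “bad action” chart above can be encoded as a lookup table (a toy encoding of my own; the ratings are copied verbatim from the chart):

```python
# A toy encoding of the "bad action" chart above, keyed by
# (intention, executed_some_intention, outcome). Ratings are copied
# verbatim from the chart; "na" marks combinations the chart treats as
# not logical in real-world scenarios.
BAD_LEVEL = {
    (True,  True,  True):  "10",    # e.g. murder
    (True,  True,  False): "8-10",  # e.g. attempted first-degree murder
    (True,  False, True):  "na",
    (True,  False, False): "0-5",   # intention alone, never executed
    (False, True,  True):  "na",
    (False, True,  False): "na",
    (False, False, True):  "0-5",   # e.g. negligence
    (False, False, False): "0",
}

def bad_level(intention: bool, executed: bool, outcome: bool) -> str:
    """Look up the chart's 'bad' rating for one (intention, execution, outcome) triple."""
    return BAD_LEVEL[(intention, executed, outcome)]

print(bad_level(True, True, False))  # -> "8-10"
```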
I’m not sure what work “to the best of personal ability” is doing here. If you execute to 95% of the best of personal ability, that seems to come to “no” in the chart and appears to count the same as doing nothing?
Or maybe does executing “to the best of personal ability” include considerations like “I don’t want to do that particular good very strongly and have other considerations to address, and that’s a fact about me that constrains my decisions, so anything I do about it at all is by definition to the best of my ability”?
The latter seems pretty weird, but it’s the only way I can make sense of “na” in the row “had intention, didn’t execute to the best of personal ability, did good”.
I think this is conditioning on one problem with one goal; I haven’t thought about other goods collectively (that is more of a discussion of consequentialism).
For “best of personal ability”, I think the purpose is to distinguish what one can do personally from what one can do by engaging collaboratively/collectively, but it seems I need to think that through better, so it is a good question.
My reason for the “na” in “had intention, no/not enough execution, did good” is that there is no action, so we cannot even infer a correlation. An example: I want to help A, but I didn’t do anything. A is saved anyway, by another person. So there was no action taken on my part.
(This should have been attached/replied to my first quick take, as opposed to a separate quick take.)
From someone’s DM to me:
“Regarding war, rape, and murder, it’s one thing to make them taboo, view them with disgust, and make them illegal. It’s another thing to want to see them disappear from the world entirely. They are side effects of human capacities that in other contexts we cherish, like the willingness to follow powerful leaders, to make sacrifices, to protect ourselves by fighting and punishing enemies who threaten or abuse us, and the ability to feel erotic excitement. This is why they keep happening.”
What I am seeing here is “side effects” and “rape → erotic excitement”, about something that is, by definition, going against another human’s will. I guess I have to explain why this is not the case in the first place, in 2024, by linking https://valleycrisiscenter.org/sexual-assault/myths-about-sexual-assault/#:~:text=Reality%3A%20Rape%20is%20not%20about,him%20accountable%20for%20his%20actions.
I am not sure if this person is 1) trying to promote “AI risks are more important than anything else” and emphasize their sole importance (based on other parts of this person’s DM) by suggesting that other issues are not real issues or dangers (if so, this is absolutely the wrong way, and also what I was worried about in the first place), or 2) truly believes the above about “unavoidable side effects” of abuses of power such as rape, seeing rape as one-sided erotic excitement built, by definition, on the suffering of another. They also comment that it is similar to having matches in homes without setting the homes on fire, which is a poor analogy: matches are useful; rape intentions, and intentions to harm other people, are not. I am not sure of this person’s conscious/unconscious motivation.
I especially hope the moral standard/level of awareness in this comment is not prevalent in our community. We will likely not get properly aligned AI if we do not care about or understand war, rape, and murder, or if we view them as distractions not worth any collective resources. Not only do we need to know they are wrong; so do AIs.
Finally, I would encourage posting these in the public comments, rather than in DMs. The person is aware of this. Be open and transparent.
Those are my words; I am the one accused of “low moral standards”, for refusing to embrace goals set by the poster.
I was going to say something in defense of myself, but I actually have other, humbler but real responsibilities, and in a moment of distraction just now, I almost made a mistake that would have diabolical consequences for the people depending on me.
Life is hard, it can be the effort of a lifetime just to save one other person, and you shouldn’t make such accusations so casually.
It is not really about you; it is about your views. You reached out to me with a DM about my initial post questioning whose beautiful world this is, written after seeing and reflecting on recent events involving rape and various outrageous wars, and you seemed to try to convince me those are not as important as AI safety.
It is not that you do not embrace “my goals”; it is that you seem to believe that working on addressing rape or war would be a distraction from AI safety risks.
You also do seem to have a worrisome misinterpretation of what rape is. You have said something like: “Since rape is a side effect of erotic excitement, removing rape would mean removing erotic excitement.” This claim is wrong. First of all, rape is about power and has limited relation to erotic excitement; that framing of “erotic excitement” is very perpetrator-centric. At best you could compare rape with murder, though I would argue it is worse than murder, because the perpetrator enjoys humiliating another human. Removing rape does not mean removing erotic excitement.
And actually, if some of these “short-term” issues are not worked on, they will likely remain forever distractions/barriers for the affected populations and their families and friends (at some point anyone could be part of that population), in anything including their own lives and their life goals (maybe even AI safety).
[Edited: If people collectively believe this, there will naturally be more acceptance of the harm one human does to another, because they would think “it is a part of human nature”.]
I may have reached the point where I can crystallize my position into a few clear propositions:
First: I doubt the capacity of human beings using purely human methods like culture and the law, to 100% abolish any of these things. Reduce their frequency and their impact, yes.
Second: I believe the era in which the world is governed by “human beings using purely human methods” is rapidly coming to a close anyway, because of the rise of AI. Rather than the human condition being perfected in a triumph of good over evil, we are on track to be transformed and/or relegated to a minor role in the drama of being, in favor of new entities with quite different dispositions and concerns.
Third: I am not a sexologist or a criminologist, but I have serious doubts about an attempt to abolish rape that wants to focus only on power and not on sexuality. I gather that this can be a good mindset for a recovering survivor, and any serious conversation on the topic of rape will necessarily touch on both power and sexuality anyway. But we’re talking abolition here, a condition that has never existed historically. As long as humanity contains beastly individuals who are simply indifferent to sympathy or seduction, and as long as some men and women derive enjoyment from playing out beastly scenarios, the ingredients will be there for rape to occur. That’s how it appears to me.
(Mentioned some of these in our chat, but allow me to be repetitive)
On the first: I don’t think efforts to reduce rape or discrimination need 100% abolition to be worthwhile; working toward it has huge returns at this point in history. Humans should have self-control and understand consent, and enforcing these would also solve the problem completely, if 100% had to be achieved. Education has a snowball effect as well. Just because it is hard to achieve 100% does not mean there should be no effort, nor that it is impossible at all. In fact, this is something that is rarely worked on alone; for example, one strategy might actually be to improve education generally, or economic disparity, and during this process teach people how to respect other people.
On the second: We likely need a good value system to align AI on; otherwise, the only alignment AI would know is probably just not to overpower the most powerful human. But that does not seem like a successful outcome of “aligned AI”. I think there are a few recent posts on this as well.
On the third: I have seen many people with this confusion: rape play/BDSM is consensual, while the definition of rape is non-consensual. Rape is purely about going against the person’s will. If you view it as akin to murder it might be more comparable, though in this case it has historically been one group acting on another due to biological features and power differences that people cannot easily change (though there is also a lot of man-on-man rape). In my view, it is worse than murder because it is extreme suffering, and that suffering carries through the victims’ whole lives; many end in suicide anyway.
Otherwise, I am glad to see you thinking about these and open to discussion.
I honestly don’t really get your point. It seems like the person who messaged you basically made the important point in the first sentence, and then the rest of this seems to be you doing a bunch of psychologizing.
I of course lack lots of context here, but I feel like you were trying to make a point that stands on its own.
I don’t really see anything wrong with the original DM. My guess is that I agree with it: I think we should be very hesitant to apply arbitrarily strong incentives against even quite bad things, since they might be correlated with, or a necessary form of collateral damage of, things that are really important for human flourishing.
This of course doesn’t imply that I (or the person who DMd you) “do not care about war, rape, nor murder”, that seems like a crazy inference that’s directly contradicted by the first sentence you quote.
I don’t see the need to remove human properties we cherish in order to remove or reduce murder or rape. Maybe war. Rape, especially, is never defensive. That is an extremely linear way of connecting these.
In particular, it is the claim that rape is in any way remotely related to erotic excitement that I found outrageous. That is simply wrong, and it is an extremely narrow and perpetrator-centered way of thinking. Check out the link in my comment if you want to learn more about myths about rape.
On the “do not care” point: my personal belief is that if one believes suffering risks should not take up resources to be worked on, not just one’s own resources but other people’s as well, then from a consequentialist point of view, or simply practically, I do not see much difference from not caring; or the degree of caring is too small to justify any allocation of resources/attention. And in my initial message I had already linked multiple recent cases. (The person said that working on these is a “dangerous distraction”, not just for himself but for “ourselves”, which seems to mean society as a whole.) Again, you do not need to emphasize the importance of AI risks by downplaying other social issues. That is not rational.
Finally, this thread post is not about the user but about their views; it stands alone as a reflection on the danger of not having the right awareness of, or attention to, these issues collectively, and on how that relates to successful AI alignment as well.
On resource scarcity: I also find it difficult for humans to execute altruism without resources, and it may be a privilege to even think about these things (financially, educationally, health-wise, and information-wise). Would starting with something important, but which may or may not be the “most” important, help? Sometimes I find the process of finding the “most” a bit less rational than people would expect/theorize.
Bias and self-preservation come from human nature, but need correction
To expand from there, that is one thing I am worried about. Our views/beliefs are by nature limited by our experiences, and if humans do not recognize or acknowledge that self-preserving nature, we will be claiming altruism without actually practicing it. It will also prevent us from establishing a really good system that incentivizes people to do so.
To give an example, there was a Manifold question (https://manifold.markets/Bayesian/would-you-rather-manifold-chooses?r=enlj) asking whether Manifold would rather “Push someone in front of a trolley to save 5 others (NO) -OR- Be the person someone else pushes in front of the trolley to save 5 others (YES)”. When I saw the stats before close, it was 92% (I chose YES, and lost; I would say my YES came with 85% confidence). While we claim to value people’s lives equally, I see self-preservation persist. And that can radiate outward: I prefer the preservation of my family, my friends, the people I am more familiar with, and so on.
Simple majority vs minority situation
This is similar to majority voting. When the population is 80% people A and 20% people B, assuming equal votes/power for everyone, then on issues where A and B are in conflict, especially under some sort of historical oppression of B by A, B’s concerns would rarely be addressed. (I hope humans nowadays are better than this, and more educated, but who knows.)
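A minimal sketch of that bloc-voting arithmetic (toy numbers, assuming everyone votes their own group’s side):

```python
# Toy numbers: group A is 80% of voters, group B is 20%. On any issue where
# the two groups' interests directly conflict and each votes its own side,
# simple majority rule gives A the win every single time.
population = {"A": 80, "B": 20}

def winner_on_conflicting_issue(pop):
    # Each bloc votes for its own side; the larger bloc always wins.
    return max(pop, key=pop.get)

for issue in ["issue 1", "issue 2", "issue 3"]:
    print(issue, "->", winner_on_conflicting_issue(population))  # always "A"
```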
Power
When power (social and economic) is added to this self-preservation, without a proper social contract, things get even more messed up. The decision-makers on these problems will always be the people with some sort of power.
This is true in reality, where the people who get to choose/plan what is most important in our society are either the powerful, or the populations that outnumber others; worse if both. Therefore we see less passionate caring about minority issues, even on a platform like this one that practices rationality. With power, everything can be pushed below it, including other people, policies, and law.
How do we create a world with maximum freedom for people, under the constraint that people do not harm other people, regardless of whether they have power, and regardless of whether they would get caught? What level of “constraint” are historically powerful groups willing to accept? I am not sure of the right balance, or what is most effective yet; I have seen cases where some men were reluctant to receive educational materials on consent, claiming they already knew, when they didn’t. This is a very, very small cost, and yet there are people not willing to take even that. This makes me frustrated.
Regardless, I do believe in the effectiveness of awareness, continuous education, law, and the need to foster a culture to cultivate humans that truly respect other lives.
Real life examples
I was not very aware of many related women’s suffering issues, until recently, from the Korean pop star group
I saw someone wrote somewhere: will try their best in their life to address X risks to make sure future humans/generations will see this beautiful world. I respect that. However, many recent news and events make me also think about the current humans/people right now. Who are seeing this beautiful world right now? Whose beautiful world this is?
The people in war didn’t see it. It is not their beautiful world. The female doctor who got brutally gang raped while on duty didn’t see it. It is not their beautiful world. Are there also enough efforts to help these people, to see a beautiful and fair world? Is this really a beautiful world? Does this beautiful world belong to everyone?
This beautiful world should belong to everyone.
Everyone including the next generation, and the next.
Getting AGI right is the odds-on way to help everyone, particularly those who are disadvantaged now.
You can disagree with the logic, but only if you understand the logic.
The utilitarian concern for everyone strongly implies that we should worry about the future. If you believe that AGI presents even a 1% x-risk, of humanity’s likely quadrillions of descendants not existing, or being born into an unbreakable dystopia, and that we’re near a tipping point, then the cold hard logic says that’s what we should all focus on. And most of us believe the risk is much higher than 1% - LWers might average around 50% chance we screw up AGI and all die.
Add to that the fact that the vast majority of humanity focuses much more on the problems you mention, and it makes more sense for those that see and believe in AGI as an immense force for good or bad to focus on making sure it’s good.
The possible rewards of empowering every human beyond the privilege of kings, and allowing quadrillions of joyous humans to live far into the future, are as immense as the risk of us being snuffed out in the next few decades.
That’s why we seem callous to the suffering of the underprivileged now. I don’t think we are.
PS - Shame on whoever downvoted this question. Suppressing dissent instead of explaining your logic to newcomers is not a good look for any community, nor good epistemics. This is a reasonable question to ask, and LW is supposed to be about asking and answering reasonable questions.
Thanks for the thoughtful comments first of all.
What I am sensing or seeing right now is in order to promote AGI risks and X risks, I am seeing people downplay/lower ranking the importance of the people who already cannot see this beautiful world. It does not feel logical to me, that because there are always future issues that affect everyone’s lives, the issues that cause some people to be already on the miserable side should not be addressed.
The problem here is that we are not have the same starting line for “everyone” now, and therefore progress towards saving everyone with future focus might not mean the same thing. I maybe should draw a graph to demonstrate my point. As opposed to only consider problems that concerns everyone, I think we should also focus a bit more on an inclusive finish line that is connected to current realities of not the same starting line. If this world continues to be this way, I also worry if the future generations would like it, or would want to be brought into this.
I understand the utilitarian intentions, but I myself also believe we could incorporate equalitarian views. And in fact, a mindset or rules promoting equality or along similar lines actually helps everyone. In many situations a human will be one of those people at some point in their life in some way. Maybe a person’s home suddenly became war zone. Maybe got disabled suddenly. Maybe experienced sexual assault for self or loved one. Establishing a good system to reduce these and prevent these helps human in the future as well. I would like to formalize this a bit more later.
Both views/also current vs future views should really joint forces, as opposed to exclude each other. There are many tasks that I see are shared such as social good mindsets and governance.
Some background about me; myself believe in looking into both, and believe in value in looking into both. It would be dangerous to focus on only one either way by promoting another, and gradually we overlook/go backwards on things that we have started.
I see your concern. I don’t think that people who are currently disadvantaged will remain behind in the expansion toward infinity. If the rich who create AGI control the future and choose to be dicks about it, all of the rest of us are screwed, not just the currently-poor.
If those who create AGI choose to use it for the common good, it will very quickly elevate the poor to equality with the current top .01%, in terms of educational opportunities. And they will be effectively wealthier than the top .01%.
That’s why I see working on AGI alignment (and the societal alignment sub-problem; trying to make sure the people who control aligned AGI/ASI aren’t total dicks or foolish about it so we die anyway) is by far the most likely thing we can do to make the world better for the disadvantaged.
Because we are not remotely on top of this shit, so there’s a very good chance we all get oblivion instead of paradise-on-earth. And all of us have finite time and energy to spend.
Do you have a specific proposal how to address those things?
Meanwhile, https://forum.effectivealtruism.org/ https://www.givewell.org/ etc.
Do you think people are firmly aware of this in the first place? I would love to hear that the answer is yes.
On solutions—I am not sure yet on universal solution, as it is very dependent on each case. For some problems, some solutions would be around raising awareness and international cooperation on educating human who hurts other human, pushing for law reforms, and alternative sentencing. Solutions aside, I am not sure if I am seeing enough people even care about these. My power alone is not enough, that’s why we need to join force.
I am worried about how people would down vote on this on this platform. I don’t think worrying about long term is bad, or it should not be looked into, but at the same time, it should not be the only thing we care either. This worries me as “we should only work on long term X risks, but nothing about the people now, and any comments that seek to say otherwise is wrong” type of sentiment is what I am seeing more and more.
There is danger in overlooking current risks. Besides obvious reasons on we are ignoring current people, from the future perspective we would be missing the connection between the future and the present, and missing the opportunity to practice applying solutions to reality. And thanks for the link to effective altruism, and I am aware of the initiative/program after attending an introductory program. It feels to me it was merging a couple different directions, and somehow it felt like recent efforts were mostly on long term X risks. For its original goal, I am also not entirely sure if it considers fairness enough, despite an effort to consider scarcity. An EA concept I see some people not understanding enough is the marginal part—marginal dollar, and marginal labor, which should allow people to invest in a portfolio of things. Would welcome any recommended readings further on this.
I think LW users are well aware of the staggering suffering occurring in the world right now. Some other humans manage to not think about it. Utilitarians, or even those who’ve thought much about it, do.
That’s good to hear. Any posts you have encountered that are good and mention these/solutions on these?
EA forums is the branch of the community that primarily deals with solutions for present-day problems. I focus heavily on x-risk, as per my other comment.
Well, which people? I wish more people cared, and I wish more people know about effective altruism.
It seems like in general most people don’t give a fuck. When I was younger and active in non-profit kinds of activities, it seemed like wherever you go, you meet the same people. Go to the local Amnesty International meeting, go to the local Greenpeace meeting, go to the local Esperanto speakers meeting, and you will probably find a few people repeatedly in all those places. And then go anywhere else, and you will find that most people have never heard about any of that.
So we have maybe 1% of population that is trying to solve all the world problems, and 99% who are mostly not even aware that any of this is happening (either the problems, or the attempted solutions).
That is okay to some degree. We need to also have people who fully focus on other things, someone who spends their energy trying to be a better heart surgeon, to build a self-driving car, etc. Some people’s attention is fully occupied raising their own kids, or improving their neighborhood. All of that is legit.
But I suspect that most people are doing none of that. They treat the entire world as “someone else’s problem”. The discuss fashion, or talk about how bored they are, or that “someone else” should fix all their first-world problems. If you know a way to change these people, please go ahead and try.
Among the “1% who care”, I think that anything political is going to be 100x more popular than anything not connected to politics. That doesn’t mean that most of them are doing something meaningful. Maybe 1 in 100 is actually reading some statistics and studying policy, and 99 in 100 are just chanting whatever slogan in currently most popular on twitter.
Sorry, there is no optimistic ending to this rant. I believe that people who (1) care and (2) are smart and competent, are in a very short supply. But I still think that among those, there are more who are about the law that those who care about AI alignment. Though both groups would benefit a lot from having more people, sufficiently caring and capable.
I am skeptical about the awareness part. I think most people have some awareness that something bad is going on in Ukraine or Palestine or one of those places I am not thinking about; or that the educational system is fucked up; or that many people are still generally bad towards others.
But being aware is just a starting point. You need to educate yourself about the issue; there is a lot of misinformation out there, so you need to figure out the truth. That it already hard work and requires some intelligence. Then you need to overcome the temptation to join the loudest and most popular people, as opposed to those who actually do the hard work (and maybe therefore don’t have the extra time and energy to promote themselves). Again, most people fail at this filter. Then you need to dedicate some time for this cause, because there are many causes you could choose from, plus maybe you should focus on your own life and career. And only if you focus on the cause… then you meet the resistance of people who profit from the status quo, who sometimes are many, or have more power.
I think you should probably try to find some NGOs in your proximity that already work on the issues you care about. Then you can join an existing group instead of trying to create a new one, and you can learn from their expertise. Maybe try more than one, so that you can compare, and there is a smaller risk of getting involved in some political cult or something. I think that if you knock on some organization’s door and say “hey, I approve of your cause, I don’t have much experience, but tell me if there is anything I could help you with”, that could be a start, especially if you are willing to do some non-fun work. Some organizations have hundreds of online supporters, but when they need someone to bring the printed flyers from one building to another, fold the flyers, and put them in printed envelops, suddenly there are no volunteers, because this part is no fun. So if you volunteer to do that, you are (1) already helping a lot, and (2) this is how you meet other people who are willing to do the non-fun parts, and those are the ones to learn from.
I meant people on this platform initially, but also good to reflect people generally, which is also where some of my concerns are.
I agree on attending NGOs, and if by you, you meant me. For awareness—most awareness does not involve detailed stories or images, and those events did leave a strong impression/emotion (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2827459/#:~:text=At%20the%20very%20least%20emotions,thought%20and%20away%20from%20others.).
It is a bit sad to see for a lot of humans, even the ones who are already caring about the world generally (which is already a privilege, because many may need to focus on being alive themselves), if something is not relatable, they don’t deeply care. When that is correlated with power, it is the worst.
The problems of the present are as old as history, as are attempts to remedy them. Attempts at a comprehensive solution range from conservative attitudes according to which the best that humans can do, is beat down the perennial evils whenever they inevitably re-arise; to various utopianisms that want to establish that beautiful and fair world forever.
Obviously Less Wrong has a particular focus on AI as something new and enormously consequential, with the potential to transform life itself, and what kind of knowledge and what kind of interventions are needed to get a good outcome. But there was at least one organization that drew upon the rationalist community and its sensibility and intellectual resources, in pursuit of utopian aims in the present. It was called Leverage Research, and from what people say, it didn’t get very far before internal problems compelled it to greatly scale back its aims. And that is quite normal for utopian movements. Mostly they will remain small and irrelevant; occasionally they will become larger and corrupt; very rarely, they will bequeath to the world some new idea which has value, becomes general knowledge, and does some good.
In years past, there were occasional surveys of Less Wrong, on topics that included political opinions. Two strains of thought that were represented, that come to mind, are progressivism and libertarianism. I mention these because these are ideologies which have adherents here, and which, in their different ways, have a kind of universal ideal by means of which they propose to solve problems in general. The progressive ideal is organized activism, the libertarian ideal is individual freedom.
Effective altruism was mentioned in other comments, as a rationalist-adjacent community which is more focused on problems of the present. There, it’s worth reflecting on the case of Sam Bankman-Fried and FTX. That was an audacious attempt to combine a particular moral ethos (effective altruism) with a particular political orientation (his family were influential Democrats) and a business plan worthy of the most ambitious tech tycoons (to move cryptocurrency out of its Wild West phase, and become one of a few regulator-approved crypto exchanges). But crypto turned out to be in a bubble, when it burst it removed the underpinnings of the whole FTX enterprise, and Bankman-Fried has gone from being celebrated as an EA philanthropist, to disowned as a crook who did immense reputational damage to the movement.
Regularly people come along who, in one way or other, want to found a broader ethos and agenda on top of Less Wrong rationalism. Just this week, @zhukeepa has been arguing for common ground between rationalism and major religious traditions (incidentally, this is far from the first time that someone, inside rationalism or outside, proposed a secular interpretation of the ethical and prophetic elements of religion). But I think most people here, when seeking a better world in ways unrelated to AI, approach that via some philosophy distinct from LW rationalism, such as effective altruism, progressivism, or libertarianism.
If you look into a bit more history on social justice/equality problems, you would see we have actually made many many progress (https://gcdd.org/images/Reports/us-social-movements-web-timeline.pdf), but not enough as the bar was so low. These also have made changes in our law. Before 1879, women cannot be lawyers (https://en.wikipedia.org/wiki/Timeline_of_women_lawyers_in_the_United_States). On war, I don’t have too much knowledge myself, so I will refrain from commenting for now. It is also my belief that we should not stop at attempt, but attempt is the first step (necessary but not sufficient), and they have pushed to real changes as history shown, but it will have to take piles of piles of work, before a significant change. Just because something is very hard to do, does not mean we should stop, nor there will not be a way (just like ensuring there is humanity in the future.) For example, we should not give up on helping people during war nor try to reduce wars in the first place, and we should not give up on preventing women being raped. In my opinion, this is in a way ensuring there is future, as human may very well be destroyed by other humans, or by mistakes by ourselves. (That’s also why in the AI safety case, governance is so important so that we consider the human piece.)
As you mentioned political party—it is interesting to see surveys happening here; a side track—I believe general equality problems such as “women can go to school”, is not dependent on political party. And something like “police should not kill a black person randomly” should not be supported just by blacks, but also other races (I am not black).
Thanks for the background otherwise.
Equity vs equality considerations (https://belonging.berkeley.edu/equity-vs-equality-whats-difference):
What caused the differences in outcome?
Historical factors: should definitely apply equity. Conditioning on history is important and corrective efforts are needed.
Is the desired outcome a human “necessity”?
The definition of necessity may be tricky, or even differ by culture. Generally in the US, if it is something like healthcare, or access to education, should move towards/apply equity.
A recent thought on AI racing—it may not lead to more intelligent models necessarily especially at a time when low hanging fruits are taken and now more advanced breakthroughs need to come from longer term exploration and research. But this also does not necessarily mean that AI racing (particularly on LLMs in this context, but I think generally too) is not something to be worried about. It may waste a lot of compute/resources to achieve only marginally better models. Additionally the worst side effect of AI racing to me is the potential negligence on safety mitigations, and lack of safety focused mindset/culture.
When thinking about deontology and consequentialism in application, it is useful to me to rate morality of actions based on intention, execution, and outcome. (Some cells are “na” as they are not really logical in real world scenarios.)
In reality, to me, it seems executed “some” intention matters (though I am not sure how much) the most when doing something bad, and executed to the best ability matters the most when doing something good.
It also seems useful to me, when we try to learn about applications of philosophy from law. (I am not an expert though in neither philosophy nor law, so these may contain errors.)
Possible to collaborate when there is enough time.
I’m not sure what work “
to the best of personal ability
” is doing here. If you execute to 95% of the best of personal ability, that seems to come to “no” in the chart and appears to count the same as doing nothing?Or maybe does executing “to the best of personal ability” include considerations like “I don’t want to do that particular good very strongly and have other considerations to address, and that’s a fact about me that constrains my decisions, so anything I do about it at all is by definition to the best of my ability”?
The latter seems pretty weird, but it’s the only way I can make sense of “na” in the row “had intention, didn’t execute to the best of personal ability, did good”.
I think this is conditioning on one problem with one goal, but I haven’t thought about the other good collectively (more of a discussion on consequentialism).
For best of personal ability, I think the purpose is to distinguish what one can do personally, and what one can do to engage collaboratively/collectively, but I need to think through that better it seems, so that is a good question.
My reason on the na for “have intention, no execution/enough execution, did good” is: there is no action, so we cannot even infer correlation. An example is, I want to help A, but I didn’t do anything. A is saved anyways, by another person. So there is no action taken on my part.
(This should have been attached/replied to my first quick take, as opposed to a separate quick take.)
From someone’s DM to me:
“Regarding war, rape, and murder, it’s one thing to make them taboo, view them with disgust, and make them illegal. It’s another thing to want to see them disappear from the world entirely. They are side effects of human capacities that in other contexts we cherish, like the willingness to follow powerful leaders, to make sacrifices, to protect ourselves by fighting and punishing enemies who threaten or abuse us, and the ability to feel erotic excitement. This is why they keep happening.”
I am seeing—“side effects”, “rape → erotic excitement”—by going against another human’s will by definition. I guess I have to explain why this is not the case in the first place by linking https://valleycrisiscenter.org/sexual-assault/myths-about-sexual-assault/#:~:text=Reality%3A%20Rape%20is%20not%20about,him%20accountable%20for%20his%20actions. in 2024.
I am not sure if this person is 1) trying to promote “AI risks are more important than anything else” to emphasis its sole importance (based on other parts from this person’s DM) by promoting that other issues are not issues/dangerous (if so this is absolutely the wrong way, and also what I am worried about in the first place), or 2) truly believe in the above about “unavoidable side effects” of the abuse of power such as rape and see it as one sided erotic excitement based on sufferings from another by definition. They also comment that it is similar to having matches in homes without setting homes on fire—which is a poor analogy as matches are useful, but not rape intentions/intentions to harm other people on offense. I am not sure of this person’s conscious/unconscious motivation.
I especially hope this moral standard/level of awareness from this comment is not prevalent among our community—we will likely not going to have properly aligned AI, if we don’t care nor understand war, rape, nor murder, or view them as distractions to work on with any resources collectively. Not only we need to know they are wrong, so are AIs.
Finally, I would encourage posting these on the public comments, rather than DM. The person is aware of this. Be open and transparent.
[edited]
Those are my words; I am the one accused of “low moral standards”, for refusing to embrace goals set by the poster.
I was going to say something in defense of myself, but I actually have other, humbler but real responsibilities, and in a moment of distraction just now, I almost made a mistake that would have had diabolical consequences for the people depending on me.
Life is hard; it can be the effort of a lifetime just to save one other person, and you shouldn’t make such accusations so casually.
It is not really about you; it is about your views. You reached out to me by DM about my initial post questioning whose beautiful world this is, which I wrote after seeing and reflecting on recent events involving rape and various outrageous wars, and you seemed to be trying to convince me that those are not as important as AI safety.
It is not that you do not embrace “my goals”; rather, you seem to believe that “working on addressing rape or war would be a distraction from AI safety risks”.
You also seem to have a worrisome misinterpretation of what rape is. You said, in effect, that since rape is a side effect of erotic excitement, removing rape would mean removing erotic excitement. This claim is wrong. Rape is about power and has little to do with erotic excitement; that framing is very perpetrator-centric. At best you could compare rape with murder, and I would argue it is worse than murder, because the perpetrator enjoys humiliating another human. Removing rape does not mean removing erotic excitement.
And actually, if these “short term” issues are not worked on, they will likely remain permanent distractions and barriers for the affected populations and their families and friends (and at some point anyone could become part of that population), blocking everything in their lives, including their life goals (which might be AI safety).
[Edited: if people collectively believe this, there will naturally be more acceptance of the harm one human does to another, because they will think “it is part of human nature”.]
I may have reached the point where I can crystallize my position into a few clear propositions:
First: I doubt the capacity of human beings, using purely human methods like culture and the law, to abolish any of these things 100%. Reduce their frequency and their impact, yes.
Second: I believe the era in which the world is governed by “human beings using purely human methods” is rapidly coming to a close anyway, because of the rise of AI. Rather than the human condition being perfected in a triumph of good over evil, we are on track to be transformed and/or relegated to a minor role in the drama of being, in favor of new entities with quite different dispositions and concerns.
Third: I am not a sexologist or a criminologist, but I have serious doubts about any attempt to abolish rape that insists on focusing only on power and not on sexuality. I gather that this can be a good mindset for a recovering survivor, and any serious conversation on the topic of rape will necessarily touch on both power and sexuality anyway. But we are talking about abolition here, a condition that has never existed historically. As long as humanity contains beastly individuals who are simply indifferent to sympathy or seduction, and as long as some men and women derive enjoyment from playing out beastly scenarios, the ingredients will be there for rape to occur. That is how it appears to me.
(I mentioned some of this in our chat, but allow me to be repetitive.)
On the first: I don’t think efforts to reduce rape or discrimination require 100% abolition; working towards it already has huge returns at this point in history. Humans should have self-control and understand consent, and enforcing these would solve the problem completely if 100% were ever required. Education has a snowball effect as well. Just because 100% is hard to achieve does not mean there should be no effort, nor that progress is impossible. In fact, this is rarely worked on in isolation; one strategy might be to improve education generally, or to reduce economic disparity, and in the process teach people how to respect other people.
On the second: we likely need a good value system to align AI on; otherwise, the only alignment the AI will know is probably “do not overpower the most powerful human”. That does not seem like a successful outcome for “aligned AI”. I think there have been a few recent posts on this as well.
On the third: I have seen many people with this confusion: rape play/BDSM is consensual, whereas rape is by definition non-consensual. Rape is purely about going against a person’s will. Murder might be a closer comparison, but rape has historically been inflicted by one group on another because of biological features and power differences that people cannot easily change, though a great deal of it is also men against men. In my view it is worse than murder because it is extreme suffering that carries through the victims’ whole lives, and many end in suicide anyway.
Otherwise, I am glad to see you thinking about these issues and staying open to discussion.
I honestly don’t really get your point. It seems like the person who messaged you basically made the important point in the first sentence, and then the rest of this seems to be you doing a bunch of psychologizing.
I of course lack lots of context here, but I feel like you were trying to make a point that stands on its own.
I don’t really see anything wrong with the original DM. My guess is that I agree with it: we should be very hesitant to apply arbitrarily strong incentives against even quite bad things, since those things might be correlated with, or a necessary form of collateral damage from, things that really matter for human flourishing.
This of course doesn’t imply that I (or the person who DMd you) “do not care about war, rape, nor murder”; that seems like a crazy inference, directly contradicted by the first sentence you quote.
Thanks for commenting.
I don’t see the need to remove human properties we cherish in order to remove or reduce murder or rape. Maybe war. Rape, especially, is never defensive. That is an extremely linear way of connecting these things.
In particular, it is the claim that rape is in any way related to erotic excitement that I found outrageous. That is simply wrong, and it is an extremely narrow, perpetrator-centered way of thinking. Check out the link in my comment if you want to learn more about myths about rape.
On the “do not care” point: my personal belief is that if one holds that suffering risks should not take up resources, not just one’s own but anyone’s, then from a consequentialist or simply practical point of view I do not see much difference from not caring, or the degree of caring is too small to justify allocating any resources or attention. In my initial message I had already linked multiple recent cases. (The person said that working on these issues would be “dangerous distractions”, not for himself alone but for “ourselves”, which seems to mean society as a whole.) Again, you do not need to emphasize the importance of AI risks by downplaying other social issues. That is not rational.
Finally, the thread post is not about the user but about their views; it stands somewhat alone as a reflection on the danger of lacking correct collective awareness of, and attention to, these issues, and on how that relates to successful AI alignment as well.
Relatedly, from my first reply to the person’s DM:
On resource scarcity: I also find it difficult for humans to practice altruism without resources, and it may be a privilege even to think about these questions (financially, educationally, health-wise, and information-wise). Would starting with something important, which may or may not be the “most” important, help? Sometimes I find the process of identifying the “most” important a bit less rational than people expect or theorize.
Bias and self-preservation are human nature, but need correction
To expand from there, that is one thing I am worried about. Our views and beliefs are by nature limited by our experiences, and if humans do not recognize or acknowledge our self-preserving nature, we will claim altruism without actually practicing it. It will also prevent us from establishing a really good system that incentivizes people to do so.
To give an example, there was a Manifold question (https://manifold.markets/Bayesian/would-you-rather-manifold-chooses?r=enlj) asking whether Manifold would rather “Push someone in front of a trolley to save 5 others (NO) -OR- Be the person someone else pushes in front of the trolley to save 5 others (YES)”. When I saw the stats before close, it stood at 92% (I chose YES, and lost; I would say my YES was held with 85% confidence). While we profess to value people’s lives equally, I see self-preservation persist. And that can radiate outward: preferring the preservation of my family, my friends, the people I am more familiar with, and so on.
Simple majority vs. minority situation
This is similar to majority voting. When the population is 80% group A and 20% group B, and everyone has an equal vote and equal power, then on issues where A and B are in conflict, especially under some sort of historical oppression of B by A, B’s concerns will rarely be addressed. (I hope humans nowadays are better than this, and more educated, but who knows.)
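To make the arithmetic concrete, here is a minimal sketch in Python; the 80/20 split comes from the example above, and everything else (group labels, vote counting) is illustrative.

```python
# A minimal sketch of one-person-one-vote majority rule with an 80/20 split.
population = ["A"] * 80 + ["B"] * 20

def majority_winner(population):
    """Each voter backs their own group's position on an issue where
    A's and B's interests directly conflict; simple majority decides."""
    b_votes = sum(1 for person in population if person == "B")
    a_votes = len(population) - b_votes
    return "B" if b_votes > a_votes else "A"

# On every such conflicting issue, the 20% minority loses.
print(majority_winner(population))  # -> "A"
```

Under these assumptions, no matter how many independent conflicting issues come up, the minority never wins a single one.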
Power
When power (social and economic) is added on top of self-preservation, without a proper social contract, things become even more messed up. The decision makers on these problems will always be the people with some sort of power.
This is true in reality, where the people who get to choose and plan what is most important in our society are either the powerful, or the population that outnumbers the rest. Worse if both. Hence we see less passionate caring about minority issues, even on a platform like this one that practices rationality. With power, everything else can be placed beneath it, including other people, policies, and law.
How do we create a world with maximum freedom for people, under the constraint that people do not harm other people, regardless of whether they have power and regardless of whether they would get caught? What level of “constraint” are historically powerful groups willing to accept? I am not yet sure of the right balance or what is most effective; I have seen cases where some men were reluctant to receive educational materials on consent, claiming they already knew, when they did not. That is a very, very small cost, and yet there are people unwilling to take even that. This frustrates me.
Regardless, I do believe in the effectiveness of awareness, continuous education, and law, and in the need to foster a culture that cultivates humans who truly respect other lives.
Real-life examples
I was not very aware of many issues of women’s suffering until recently, from the case involving the Korean pop star group to the recent female doctor case (https://www.npr.org/sections/goats-and-soda/2024/08/26/g-s1-18366/rape-murder-doctor-india-sexual-violence-protests). I had believed that by 2024 these were things humans should already have been able to solve, but maybe I was in my own bubble. And at the organizational and country level, I believe you will have seen plenty of news about different wars by now.
Thanks for sharing the two pages! I am not sure whether the above is clear, but I try to document all of my thoughts.