“we” can’t steer the future.
it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
most cultures and societies in human history have been so bad, by my present values, that I’m not sure they’re not worse than extinction, and we should expect that most possible future states are similarly bad;
history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
the literal species Homo sapiens is pretty resilient and might avoid extinction for a very long time, but have you MET Homo sapiens? this is cold fucking comfort! (see e.g. C. J. Cherryh’s vision in 40,000 in Gehenna for a fictional representation not far from my true beliefs — we are excellent at adaptation and survival but when we “survive” this often involves unimaginable harshness and cruelty, and changing into something that our ancestors would not have liked at all.)
identifying with species-survival instead of with the stuff we value now is popular among the thoughtful but doesn’t make any sense to me;
in general it does not make sense, to me, to compromise on personal values in order to have more power/influence. you will be able to cause stuff to happen, but who cares if it’s not the stuff you want?
similarly, it does not make sense to consciously optimize for having lots of long-term descendants. I love my children; I expect they’ll love their children; but go too many generations out and it’s straight-up fantasyland. My great-grandparents would have hated me. And that’s still a lot of shared culture and values! Do you really have that much in common with anyone from five thousand years ago?
Evolution is not your friend. God is not your friend. Everything worth loving will almost certainly perish. Did you expect it to last forever?
“I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
I despise sour grapes. If the thing I want isn’t available, I’m not going to pretend that what is available is what I want.
I am not going to embrace the “realistic” plan of allying with something detestable but potent. There is always an alternative, even if the only alternative is “stay true to your dreams and then get clobbered.”
it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
How does “this is so futile” square with the massive success of taxes and criminal justice? From what I’ve heard, states have managed to reduce murder rates by 50x. Obviously that’s stopping people from something violent rather than non-violent, but what aspect of violence makes it the relevant category? Or how about taxes that fund the transition to renewable energy? The main argument for socially-conservative cultural reform is fertility, but what about taxes that fund kindergartens? They seem to serve a similar function.
The key trick to make it correct to try to control people or stop them is to be stronger than them.
I think this prompts some kind of directional update in me. My paraphrase of this is:
It’s actually pretty ridiculous to think you can steer the future.
It’s also pretty ridiculous to choose to identify with what the future is likely to be.
Therefore…. Well, you don’t spell out your answer. My answer is “I should have a personal meaning-making resolution to ‘what would I do if those two things are both true,’ even if one of them turns out to be false, so that I can think clearly about whether they are true.”
I’ve done a fair amount of similar meaning-making work through the lens of Solstice 2022 and 2023. But that was more through the lens of ‘near-term extinction’ than ‘inevitability of value loss’, which does feel like a notably different thing.
So it seems worth doing some thinking and pre-grieving about that.
I of course have some answers to ‘why value loss might not be inevitable’, but it’s not something I’ve yet thought about through an unclouded lens.
Therefore, do things you’d be in favor of having done even if the future will definitely suck. Things that are good today, next year, fifty years from now… but not like “institute theocracy to raise birth rates”, which is awful today even if you think it might “save the world”.
Ah yeah that’s a much more specific takeaway than I’d been imagining.
I honestly feel that the only appropriate response is something along the lines of “fuck defeatism”[1].
This comment isn’t targeted at you, but at a particular attractor in thought space.
Let me try to explain why I think rejecting this attractor is the right response rather than engaging with it.
I think it’s mostly that I don’t think that talking about things at this level of abstraction is useful. It feels much more productive to talk about specific plans. And if you have a general, high-abstraction argument that plans in general are useless, but I have a specific argument why a specific plan is useful, I know which one I’d go with :-).
Don’t get me wrong, I think that if someone struggles for a while to make a difference and just hits wall after wall, then at some point they have to call it. But that’s completely different from “never start” or “don’t even try”.
It’s also worth noting that saving the world is a team sport. It’s okay to pursue a plan that depends on a bunch of other folk stepping up and playing their part.
I would also suggest that this is the best way to respond to depression rather than “trying to argue your way out of it”.
I’m not defeatist! I’m picky.
And I’m not talking specifics because I don’t want to provoke argument.
What about influencing? If, in order for things to go OK, human civilization must follow a narrow path which I individually need to steer us down, we’re 100% screwed, because I can’t do that. But I do have some influence: a great deal of influence over my own actions (I’m resisting the temptation to go down a sidetrack about determinism, assuming you’re modeling humans as things that can make meaningful choices), substantial influence over the actions of those close to me, some influence over my acquaintances, and so on down to vanishingly little (but not zero) influence over humanity as a whole.

I also note that you use the word “we”, but I don’t know who the “we” is. Is it everyone? If so, then everyone collectively has a great deal of say about how the future will go, if we can coordinate. Admittedly, we’re not very good at this right now, but there are paths to developing this civilizational skill further than we currently have. So maybe the answer to “we can’t steer the future” is “not yet we can’t, at least not very well”?
it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
Agree, mostly. The steering I would aim for is setting up systems wherein the locally self-interested, non-violent things people are incentivized to do have positive effects on humanity’s future. In other words: set up society such that individual and humanity-wide effects point in the same direction with respect to some notion of “goodness”, rather than individual actions harming the group, or group actions harming or stifling the individual. We live in a society where we can collectively decide the rules of the game, which is a way of “steering” a group. I believe we should settle on a ruleset where individual short-term moves that seem good lead to collective long-term outcomes that seem good.

Individual short-term moves that clearly lead to bad collective long-term outcomes should be disincentivized, and if the effects are bad enough then coercive prevention does seem warranted (e.g., a SWAT team to prevent a mass shooting). The same goes for groups stifling individuals’ ability to do things that seem good for them in the short term. And rules that have perverse incentive effects, harmful to the individual, the group, or both? Definitely out.

This type of system design is like a haiku: very restricted in what design choices are permissible, but not impossible in principle. It seems worth trying, because if it succeeds, everything is good with no coercion. If even a tiny subsystem can be designed (or the current design tweaked) in this way, that by itself is good. And the right local/individual move to influence the systems you are part of toward that state, as a cognitively limited individual who can’t hold whole complex systems in mind and accurately predict the effects of proposed changes far into the future, might be as simple as saying “in this instance you’re stifling the individual” and “in this instance you’re harming the group/long-term future” wherever you see it, until eventually you get a system that does neither. Like arriving at a haiku by pointing out every time the rules of haiku construction are violated.
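A minimal toy sketch of the incentive-alignment idea above, assuming a simple model where each agent’s activity benefits themselves but harms everyone else: with no rule, the individually rational choice is collectively terrible; with a fee that prices in the external harm, the same self-interested optimization becomes collectively optimal. The numbers, the utility shape, and the fee rule are all illustrative assumptions, not a claim about the commenter’s actual proposal.

```python
# Toy model: self-interest vs. group outcome under two rulesets.
# Assumptions (purely illustrative): N agents, sqrt-shaped private
# benefit, linear external harm, and a Pigouvian-style fee equal to
# the harm one agent's activity imposes on the other N-1 agents.
import numpy as np

N = 10                               # number of agents
HARM = 0.8                           # harm per unit of activity, per other agent
levels = np.linspace(0, 10, 101)     # activity levels an agent can choose

def private_benefit(x):
    return 4 * np.sqrt(x)            # diminishing returns to own activity

def best_response(fee):
    # Each agent maximizes own payoff; harm to others is invisible
    # to them unless the rules price it in via the fee.
    return levels[np.argmax(private_benefit(levels) - fee * levels)]

def total_welfare(x):
    # Everyone's benefit, minus the harm each agent inflicts on the rest.
    return N * (private_benefit(x) - HARM * (N - 1) * x)

for fee, label in [(0.0, "no rule"), (HARM * (N - 1), "harm priced in")]:
    x = best_response(fee)
    print(f"{label:>15}: each agent picks x = {x:.1f}, "
          f"total welfare = {total_welfare(x):+.1f}")
```

With the fee set equal to the harm one agent imposes on the other N-1, each agent’s private objective coincides exactly with per-capita welfare, so the locally self-interested move and the humanity-wide good move become the same move, which is the property the comment above is pointing at.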
I disagree a lot! Many things have gotten better! Are suffrage, abolition, democracy, property rights, etc. not significant? All the random stuff that, e.g., The Better Angels of Our Nature claims has gotten better really has gotten better.
Either things have improved in the past or they haven’t, and people trying to “steer the future” either have or haven’t been influential in those improvements. I think things have improved, and I think there’s definitely not strong evidence that trying to steer the future has always been useless. Because trying to steer the future is very important and motivating, I try to do it.
Yes, the counterfactual impact of you individually trying to steer the future may be insignificant, but people trying to steer the future is better than no one doing that!
“Let’s abolish slavery,” when proposed, would make the world better now as well as later.
I’m not against trying to make things better!
I’m against doing things that are strongly bad for present-day people to increase the odds of long-run human species survival.
“I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
Proposal: For any given system, there’s a destiny based on what happens when it’s developed to its full extent. Sight is an example of this, where both human eyes and octopus eyes and cameras have ended up using lenses to steer light, despite being independent developments.
“I love whatever is the destiny” is, as you say, no loyalty and no standards. But, you can try to learn what the destiny is, and then on the basis of that decide whether to love or oppose it.
Plants and solar panels are the natural destiny for earthly solar energy. Do you like solarpunk? If so, good news, you can love the destiny, not because you love whatever is the destiny, but because your standards align with the destiny.
People who love solarpunk don’t obviously love computronium dyson spheres tho
That is true, though:
1) Regarding tiling the universe with computronium as destiny is Gnostic heresy.
2) I would like to learn more about the ecology of space infrastructure. Intuitively it seems to me like the Earth is much more habitable than anywhere else, so I would expect Sarah’s “this is so futile” point to actually be inverted when it comes to e.g. a Dyson sphere, where stagnation-inducing worldwide regulation will by default be stronger than the entropic pressure.
More generally, I have a concept I call the “infinite world approximation”, which I think held until ~WWI. Under this approximation, your methods have to be robust against arbitrary adversaries, because they could invade from parts of the ecology you know nothing about. However, this approximation fails for Earth-scale phenomena, since Earth-scale organizations could shoot down any attempt at space colonization.
Are you saying this because you worship the sun?
I would more say the opposite: Henri Bergson (better known for inventing vitalism) convinced me that there ought to be a simple explanation for the forms life takes, and so I spent a while performing root cause analysis on that, and ended up with the sun as the creator.
This post reads like it’s trying to express an attitude or put forward a narrative frame, rather than trying to describe the world.
Many of these claims seem obviously false if I take them at face value and take a moment to consider what they’re claiming and whether it’s true.
e.g., on the first two bullet points it’s easy to come up with counterexamples. Some successful attempts to steer the future, by stopping people from doing locally self-interested & non-violent things, include patent law (“To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries”) and banning lead in gasoline, as well as some others that I now see other commenters have mentioned.
history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
It seems like it makes some difference whether our civilization collapses the way that the Roman Empire collapsed, the way that the British Empire collapsed, or the way that the Soviet Union collapsed. “We must prevent our civilization from ever collapsing” is clearly an implausible goal, but “we should ensure that a successor structure exists and is not much worse than what we have now” seems rather more reasonable, no?
Is it too much to declare this the manifesto of a new philosophical school, Constantinism?
wait and see if I still believe it tomorrow!
I don’t think it was articulated quite right—it’s more negative than my overall stance (I wrote it when unhappy) and a little too short-termist.
I do still believe that the future is unpredictable, that we should not try to “constrain” or “bind” all of humanity forever using authoritarian means, and that there are many many fates worse than death and we should not destroy everything we love for “brute” survival.
And, also, I feel that transience is normal and only a bit sad. It’s good to save lives, but mortality is pretty “priced in” to my sense of how the world works. It’s good to work on things that you hope will live beyond you, but Dark Ages and collapses are similarly “priced in” as normal for me. Sara Teasdale: “You say there is no love, my love, unless it lasts for aye; Ah folly, there are episodes far better than the play!” If our days are as a passing shadow, that’s not that bad; we’re used to it.
I worry that people who are not ok with transience may turn themselves into monsters so they can still “win”—even though the meaning of “winning” is so changed it isn’t worth it any more.
I do think this comes back to the messages in On Green, and also to why the post went down like a cup of cold sick—rationality is about winning. Obviously nobody on LW wants to “win” in the sense you describe, but on the margin I think they’d choose more winning over more harmony.
The future will probably contain less of the way of life I value (or something entirely orthogonal), but then that’s the nature of things.
I think two cruxes dominate a lot of the discussion relevant here:
1) Will a value lock-in event happen, especially soon, such that once values are locked in it’s basically impossible to change them?
2) Is something like the vulnerable world hypothesis correct about technological development?
If you believe 1 or 2, I can see why you would disagree with Sarah Constantin’s statement here.
I have been having similar thoughts on the main points here for a while, so thanks for this.
I guess to me what needs attention is when people do things along the lines of “benefit themselves and harm other people”. That harm has a pretty strict definition, though I know we can always find borderline examples. This definitely includes the abuse of power in our current society and culture, and current risks. (For example, restricting just to AI, and with a content warning: https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf. This is very sad to see.) On the other hand, with regard to climate change (which is also current) or AI risks, we should also be concerned when corporations or developers neglect known risks or pursue science/development irresponsibly. I think it is not wrong to work on these; I just don’t believe in “do not solve the other current risks and only work on future risks.”
On some comments saying our society is “getting better”—sure, but the baseline is a very low bar (slavery, for example). There are still many, many, many examples in different societies of how things are still very systematically messed up.
You seem to dislike reality. Could it not be that the worldview which clashes with reality is wrong (or rather, in the wrong), rather than reality being wrong/in the wrong? For instance that “nothing is forever” isn’t a design flaw, but one of the required properties that a universe must have in order to support life?