What I am sensing right now is that, in order to promote concern about AGI risk and x-risk, people downplay or deprioritize the importance of those who already cannot see this beautiful world. It does not feel logical to me that, because there will always be future issues affecting everyone’s lives, the issues that already put some people on the miserable side should not be addressed.
The problem here is that “everyone” does not have the same starting line today, so future-focused progress toward saving everyone might not mean the same thing for each person. I should probably draw a graph to demonstrate my point. Rather than only considering problems that concern everyone, I think we should also focus a bit more on an inclusive finish line, one connected to the current reality of unequal starting lines. If the world continues the way it is, I also worry whether future generations would like it, or would want to be brought into it.
I understand the utilitarian intentions, but I believe we could also incorporate egalitarian views. In fact, a mindset or set of rules promoting equality actually helps everyone, because in many situations a person will become one of the disadvantaged at some point in their life, in some way. Maybe their home suddenly becomes a war zone. Maybe they suddenly become disabled. Maybe they or a loved one experiences sexual assault. Establishing a good system to reduce and prevent these harms helps everyone in the future as well. I would like to formalize this a bit more later.
The two views, current-focused and future-focused, should really join forces rather than exclude each other. Many of the tasks I see are shared between them, such as cultivating social-good mindsets and governance.
Some background about me: I believe in looking into both, and I see value in looking into both. It would be dangerous to focus on only one by promoting the other, and to gradually overlook or backslide on the things we have already started.
I see your concern. I don’t think that people who are currently disadvantaged will remain behind in the expansion toward infinity. If the rich who create AGI control the future and choose to be dicks about it, all of the rest of us are screwed, not just the currently-poor.
If those who create AGI choose to use it for the common good, it will very quickly elevate the poor to equality with the current top 0.01% in terms of educational opportunities, and they will be effectively wealthier than today’s top 0.01%.
That’s why I see working on AGI alignment (and the societal-alignment sub-problem: trying to make sure the people who control aligned AGI/ASI aren’t total dicks, or so foolish about it that we die anyway) as by far the most likely way we can make the world better for the disadvantaged.
We are not remotely on top of this shit, so there’s a very good chance we all get oblivion instead of paradise-on-earth. And all of us have finite time and energy to spend.
First of all, thanks for the thoughtful comments.