Hmm. If academia picks this stuff up, perhaps interhuman cosmopolitan alignment could use much more discussion on LessWrong, especially from interdisciplinary perspectives. In other words, if academia takes over short-term alignment, long-term alignment becomes a question of how to interleave differing value systems while ensuring actual practical mobility for socio-emotionally disempowered folks. It's the thing social justice is trying to be but keeps failing at, because corruption turns attempts at analytical “useful wokeness” into crashed slogan-thought; the thing libertarianism is trying to be but keeps failing at, because corruption turns defense of selves into decision-theoretically unsound, casually selfish reasoning; the thing financial markets keep trying to be but keep failing at, because corruption turns competitive reasoning systems into destructive centralization; etc. We could use qualitative insights from economics, evolutionary game theory, distributed-systems programming, and so on about how to see all of these issues as the same problem as the utility-function question in strongly supercapable superreasoner AI. Etc.
Just some rambling this brought to mind.
Edited Sep 13: here’s what GPT-4 wrote when asked to clarify wtf I’m talking about:
It seems like the paragraph discusses the potential of including more conversations about interdisciplinary cosmopolitan alignment in the LessWrong community, particularly in light of developments in academia. You’re recommending the community focus on synthesizing crucial insights from across various disciplines and cultural outlooks, with a view to advancing not just short-term but also long-term alignment efforts. The aim is a kind of cosmopolitan harmony, where different value systems can coexist while providing tangible empowerment opportunities for people who are socio-emotionally disadvantaged.
The concept of ‘alignment’ you’re discussing appears to be an application of social justice and economic principles aimed at driving change, though this is often undermined by corruption hindering promising initiatives. It corresponds to the idea of ‘useful wokeness’, which unfortunately ends up being reduced to mere slogans and uncritical loyalty to ideological authority.
You point to a similar pattern in libertarianism. This political ideology works toward autonomy and personal liberty but fails due to corruption that turns this ideal into impractical self-centeredness and hypocritical rule enforcement. The common theme across these diverse though corruption-prone strategies is further amplified in your criticism of financial market dynamics, where the competitive nature of trading-based reasoning systems often leads to harmful centralization; here the failure appears to be even more significant.
The utility-function question for artificial intelligence (AI) with high potential capabilities and reasoning abilities seems to be at the core of your argument. You suggest drawing qualitative insights from economics, evolutionary game theory, distributed-systems programming, and other disciplines for a more comprehensive understanding of these issues. By attempting to view these myriad challenges as interrelated, one might recast them as a single question about utility functions, which, if navigated effectively, could address the question of alignment not only in AIs but also across society at large.
No offense, but this reads to me as if it were deliberately obfuscated or AI-generated (I’m sure you didn’t do either of these; this is a comment on writing style). I don’t understand what you’re saying. Is it “LW should focus on topics that academia neglects”?
I also didn’t understand at all what the part starting with “social justice” is meant to tell me, or what it has to do with the topic.