Gabriel Alfour (@gabe_cc)
Ideologies are slow and necessary, for now
Conjecture: A Roadmap for Cognitive Software and A Humanist Future of AI
The Compendium, A full argument about extinction risk from AGI
“Epistemic range of motion” and LessWrong moderation
There is no IQ for AI
For Civilization and Against Niceness
On Lies and Liars
By this logic, any instrumental action taken towards an altruistic goal would be “for personal gain”.
I think you are making a genuine mistake, and that I could have been clearer.
There are instrumental actions that favour everyone (e.g. raising epistemic standards), and instrumental actions that favour only you (e.g. making money).
The latter are for personal gain, regardless of your end goals.
Sorry for not going deeper into it in this comment; it is quite a vast topic.
I might instead write a longer post about the interactions between deontology & consequentialism, and between egoism & altruism.
Lying is Cowardice, not Strategy
Cognitive Emulation: A Naive AI Safety Proposal
Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes
(I strongly upvoted the comment to signal-boost it, and to let people who agree express their agreement directly if they don’t have any specific meta-level observation to share.)
Happy to have a public conversation on the topic. Just DM me on Twitter if you are interested.