So now after looking into the “last man” and “men without chests” concepts, I think the relevant quote from “men without chests” is at the end:
The Apostle Paul writes, “The aim of our charge is love that issues from a pure heart and a good conscience and a sincere faith” (1 Timothy 1:5, ESV).
If followers of Christ live as people with chests—strong hearts filled with God’s truth—the world will take notice.
“Men without chests” are then purely selfish rational agents devoid of altruism/love. I agree that naively maximizing the empowerment of a single human brain or physical agent could be a tragic failure. I think there are two potential solution paths to this, which I hint at in the diagram (which clearly shows a bunch of agents being empowered) and mention in a few places in the article, but should have discussed more clearly.
One solution I mention is to empower humanity, or agency more broadly, which then naturally handles altruism, but this leaves open how to compute the aggregate estimate and may require some approximate solution to social decision theory, aka governance. Or maybe not; perhaps simply computing empowerment as if humanity were a single coordinated mega-agent works. Not sure yet.
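To make the distinction concrete, here is a minimal toy sketch (my own illustration, not from the article): for a deterministic agent, n-step empowerment reduces to the log of the number of distinct states it can reach in n steps. The sketch contrasts two hypothetical aggregation schemes from the paragraph above, averaging per-agent empowerment versus treating everyone as one joint mega-agent, using two independent agents on a 1-D line. All names (`reachable`, `empowerment`) are invented for the example.

```python
# Toy illustration (assumption: deterministic dynamics, so n-step
# empowerment = log2 of the number of distinct reachable states).
from itertools import product
from math import log2

def reachable(start, moves, n):
    """All positions reachable from `start` in exactly n moves (1-D line world)."""
    return {start + sum(seq) for seq in product(moves, repeat=n)}

def empowerment(start, moves, n):
    return log2(len(reachable(start, moves, n)))

moves = (-1, 0, 1)   # each agent may step left, stay, or step right
agents = [0, 10]     # two agents on a line, far enough apart not to interact
n = 3

# Scheme 1: aggregate by averaging the individual empowerment estimates.
avg = sum(empowerment(a, moves, n) for a in agents) / len(agents)

# Scheme 2: a single mega-agent whose state is the joint position tuple.
# Since the agents here are independent, the joint reachable set is the
# Cartesian product of the individual reachable sets.
def joint_reachable(starts, moves, n):
    return set(product(*(reachable(s, moves, n) for s in starts)))

mega = log2(len(joint_reachable(agents, moves, n)))

print(avg)   # log2(7)  -- per-agent empowerment
print(mega)  # log2(49) -- joint empowerment, the SUM of per-agent terms
```

For independent agents the mega-agent's empowerment is the sum (not the average) of the individual terms, so the two schemes already rank outcomes differently; once agents interact or must coordinate, the joint estimate diverges further, which is where the governance question bites.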
The other potential solution is to recognize that brains really aren’t the agents of relevance, and that we instead need a more detailed distributed-simulacra theory of mind. The brain is the hardware, but the true agents are distributed software minds that coordinate simulacra across multiple brains. So as you are reading this, your self-simulacrum is listening to a local simulacrum of myself, and in writing this my self-simulacrum is partially simulating a simulacrum of you, etc. Altruism and selfishness are then different flavours of local simulacra governance systems, with altruism being vaguely more similar to democracy and selfishness more similar to autocracy. When our brain imagines future conversations with someone, that particular simulacrum gains almost as much simulation attention as our self-simulacrum. The internal dynamics are similar to simulacra in LLMs, which shouldn’t be surprising, because our cortex is largely trained by self-supervised prediction, like LLMs, on similar data.
So handling altruism is important, but I think it’s just equivalent to handling cooperative/social utility aggregation, which any full solution needs anyway.
The last man concept doesn’t seem to map correctly:
The last man is the archetypal passive nihilist. He is tired of life, takes no risks, and seeks only comfort and security. Therefore, The Last Man is unable to build and act upon a self-actualized ethos.
That seems more like the opposite of an empowerment optimizer; perhaps you meant the Übermensch?