Allostasis is a more biologically plausible explanation of “what a brain does” than homeostasis, but to your point: I do think optimizing for happiness and doing kinda-homeostasis are “just the same somehow”.
I have a slightly circular view that the extension of happiness exists as an output of a network with 86 billion neurons and 60 trillion connections, and that it is a thing the brain can optimize for. Even if the intension of happiness as defined by a few English sentences is not the thing, even if optimization for slightly different targets would be very fragile, and even if the attractor of happiness turns out to be very small and surrounded by dystopian tar pits, I still think it is something that exists in the real world and is worth searching for.
Though if we cannot find any useful intension, perhaps other approaches to AI alignment, rather than the "search for human happiness", will prove more practical.