As someone who does value happiness alone, I’d like to say that it’s still not that simple (there’s no known way to calculate the happiness of a given system), and that I understand full well that maximizing it would be the end of all life as we know it. Whatever we end up with will be very, very happy, and that’s good enough for me, even if it isn’t really anything besides happy (such as remotely intelligent).