I like this perspective. I guess I was seeing “becoming a celebrity” as a choice of some sort or a separate thing. But it does seem that the problem is entertainment, and there is a big spectrum of people trying to solve it with different means.
Looking at it like that, trying to solve entertainment is definitely not a bad thing. Just maybe less effective at saving/improving lives than some other career paths.
Would be interesting to somehow compare the impact of a doctor/philanthropist to an entertainer.
Either way, thanks for sharing!
For an EA, being less effective at saving/improving lives is a bad thing. It is the bad thing. That is practically the definition of EA.
Wouldn’t you agree, though, that one should probably not always do the number-one effective thing? Can we even really say confidently which thing is most effective?
I’m not a utilitarian, or an A, E or otherwise, so it would be better for someone who is to answer that. But emulating that role as best I can: of course (a utilitarian would say) one should always do the number-one effective thing, if one knows what it is. If one is unsure, then put numbers on the uncertainties and do the number-one most-effective-in-expectation thing. If you want to take high vs. low variance of outcomes into account (as SBF notably did not), just add that into the utility function. That is what utilitarianism is, and EA is utilitarianism applied to global wellbeing.
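The decision rule described above (maximize expected utility, optionally penalizing variance) can be sketched in a few lines. Everything here is an illustration, not anyone's actual model: the option names, probabilities, utilities, and the risk-aversion parameter are all made up.

```python
# Toy sketch of expected-utility choice with an optional variance penalty.
# All numbers below are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def variance(outcomes):
    mu = expected_utility(outcomes)
    return sum(p * (u - mu) ** 2 for p, u in outcomes)

def risk_adjusted_score(outcomes, risk_aversion=0.0):
    # risk_aversion = 0 is the pure expected-value rule;
    # larger values penalize high-variance gambles.
    return expected_utility(outcomes) - risk_aversion * variance(outcomes)

# Two hypothetical career options:
options = {
    "safe career": [(1.0, 10.0)],               # certain, modest impact
    "moonshot":    [(0.1, 200.0), (0.9, 0.0)],  # small chance of huge impact
}

for lam in (0.0, 0.05):
    best = max(options, key=lambda name: risk_adjusted_score(options[name], lam))
    print(f"risk_aversion={lam}: choose {best}")
```

With risk aversion set to zero the moonshot wins on pure expectation (20 vs. 10); turning the variance penalty up flips the choice to the safe career, which is the "add it into the utility function" move the comment describes.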