If you are one of those people and are not fully committed to the cause, I am asking you, why are you not doing more?
To some extent because I am busy asking myself questions like: What are the moral reasons that seem as if they point toward fully committing myself to the cause? Do they actually imply what I think they imply? Where do moral reasons in general get their justification? Where do beliefs in general get their justification? How should I act in the presence of uncertainty about how justification works? How should I act in the presence of structural uncertainty about how the world works (both phenomenologically and metaphysically)? How should I direct my inquiries about moral justification and about the world in a way that is most likely to itself be justified? How should I act in the presence of uncertainty about how uncertainty itself works? How can I be more meta? What causes me to provisionally assume that being meta is morally justified? Are the causes of my assumption normatively justifiable? What are the properties of “meta” that make it seem important, and is there a wider class of concepts that “meta” is an example or special case of?
(Somewhat more object-level questions include:) Is SIAI actually a good organization? How do I determine goodness? How do baselines work in general? Should I endorse SIAI? What institutions/preferences/drives am I tacitly endorsing? Do I know why I am endorsing them? What counts as endorsement? What counts as consent? Specifically, what counts as unreserved consent to be deluded? Is the cognitive/motivational system that I have been coerced into or have engineered itself justifiable as a platform for engaging in inquiries about justification? What are local improvements that might be made to said cognitive/motivational system? Why do I think those improvements wouldn’t have predictably-stupid-in-retrospect consequences? Are the principles by which I judge the goodness or badness of societal endeavors consistent with the principles by which I judge the goodness or badness of endeavors at other levels of organization? If not, why not? What am I? Where am I? Who am I? What am I doing? Why am I doing it? What would count as satisfactory answers to each of those questions, and what criteria am I using to determine satisfactoriness for answers to each of those questions? What should I do if I don’t have satisfactory answers to each of those questions?
Et cetera, ad infinitum.
I’m loving the fact that “How do I determine goodness?” and “What counts as consent?” are, in this context, “somewhat more object-level questions.”