(I apologize for being, or skirting too close to the edges of being, too political. I accept downvotes as the fair price and promise not to begrudge anyone for them.)
I have an observation that I want more widely appreciated by low-contextualizers (who may be high or low in decoupling as well; they are independent axes): insisting that conversations happen purely in terms of the bet-resolvable portion of reality, without an omniscient being to help out as bet arbiter, can be frame control.
Status quos contain self-validating reductions. People looking to score Pragmatic Paternalist status points can frame predictable bet outcomes as vindication of complacence with power that is arbitrary, unreasonably and bullyingly exercised, often violent, and vastly sacrificial of intrinsic value, on the grounds that fixing the situation is weird and demands a demonstrably inconvenient degree of political ambition.
They seem to think, out of a sense of entitlement to epistemic propriety, that there must be some amount of non-[philosophical-arguments]-based evidence that should discourage a person from trying to resolve vastly, objectively evil situations that neither the laws of physics nor any other [human-will]-independent laws of nature require or forbid. They are mistaken.
If that sounds too much like an argument for communism, get over it; I love free markets and making Warren Buffett the Chairman of America is no priority of mine.
If it sounds too much like an argument for denying biological realities, get over it; I’m not asking for total equality, just for moral competence on the part of institutions and individuals with respect to biological realities, and I detest censorship of all the typical victims, though I make an exception for genuine infohazards.
If you think my standards are too high for humanity, were Benjamin Lay’s also too high? I think his efforts paid off even if our world is still not perfect; I would like to have a comparable effect, were I not occupied with learning statistics so that I can help align AI for this guilty species.
If you think factory-farmed animals have things worse than children… Yes. But I am alienated by EA’s relative quietude; you may not see it this way, but so-called lip service is an invitation to privately conducted accountability negotiation, and I value that immensely as a foundation for change.