DM me anything
I’m nerveWrangler on discord
I should have done the second; I mistakenly believed that clicking “Read More” in the commenting guidelines would not give me sufficient clarity about Duncan’s elaborate standards. I apologize for my rude behavior.
I insist that you either always use it non-violently or always explain why it does not just mean ‘being weird and disagreeable’, and also why it doesn’t mean anything else that is entirely morally irrelevant either, because you should never be cruel over anything that is morally irrelevant.
Why the downvotes? “Lizardman” is a great status-reducing thing to call a person just for being too weird and disagreeable! :)
This was the original reasoning behind appointing judges for life: that society needed principled men and women of discernment who did not need to placate or cater to a lizardman.
After all, no one of discernment would ever heed a true lizardman. They know the difference between someone who seems like a lizardman and someone who is a lizardman.
No sane person can disagree with any of this succinctly on the level of truth or of misleadingness; excellently written.
If AI copied all human body layouts down to the subatomic level, then re-engineered all human bodies so they were no longer recognizably human but rather something human-objectively superior, then gave all former humans the option to change back to their original forms, would this have been a good thing to do?
I think so!
We have been warned in ominous tones that “nothing human survives into the far future.”
I’m not sure human-objectivity permits humanity to remain mostly-recognizably human, but it does require that former humans have the freedom to change back if they wish, and I’m sure that many would, and that would satisfy the criterion of something human surviving into the far future.
(I apologize for being, or skirting too close to the edges of being, too political. I accept downvotes as the fair price and promise no begrudgement for it.)
I have an observation that I want more widely appreciated by low-contextualizers (who may be high or low in decoupling as well; they are independent axes): insisting that conversations happen purely in terms of the bet-resolvable portion of reality, without an omniscient being to help out as bet arbiter, can be frame control.
Status quos contain self-validating reductions, and people looking to score Pragmatic Paternalist status points can frame predictable bet outcomes as vindication of complacence with arbitrary, unreasonably and bullyishly exercised, often violent, vastly intrinsic-value-sacrificial power, on the basis of the weirdness and demonstrably inconvenient political ambitiousness of fixing the situation.
They seem to think, out of entitlement to epistemic propriety, that there must be some amount of non-[philosophical-arguments]-based evidence that should discourage a person from trying to resolve vastly objectively evil situations that neither the laws of physics, nor any other [human-will]-independent laws of nature, require or forbid. They are mistaken.
If that sounds too much like an argument for communism, get over it; I love free markets and making Warren Buffett the Chairman of America is no priority of mine.
If it sounds too much like an argument for denying biological realities, get over it; I’m not asking for total equality, I’m just asking for moral competence on behalf of institutions and individuals with respect to biological realities, and I detest censorship of all the typical victims, though I make exception for genuine infohazards.
If you think my standards are too high for humanity, were Benjamin Lay’s also too high? I think his efforts paid off even if our world is still not perfect; I would like to have a comparable effect, were I not occupied with learning statistics so that I can help align AI for this guilty species.
If you think factory farmed animals have things worse than children… Yes. But I am alienated by EA’s relative quietude; you may not see it this way, but so-called lip service is an invitation for privately conducted accountability negotiation, and I value that immensely as a foundation for change.
Engineering and gaming are just other words for understanding the constraints deeply enough to find the paths to desired (by the engineer) results.
Yes.
The words you choose are political, with embedded intentional beliefs, not definitional and objective about the actions themselves.
Well now that was out of left field! People don’t normally say that without having a broader disagreement at play. I suppose you have a more objective reformulation of my words prepared to offer me? My point about the letter of the law being more superficial than the spirit seems like a robust observation, and I think my choice of words accurately, impartially, and non-misleadingly preserves that observation;
until you have a specific argument against the objectivity, your response amounts to an ambiguously adversarially-worded request to imagine I was systematically wrong and report back my change of mind. I would like you to point my imagination in a direction that seems promising for producing a shift in belief.
Funny that you think gameability is closer to engineering; I had it in mind that exceptioncraft was closer. To my mind, gameability is more like rules-lawyering the letter of the law, whereas exceptioncraft relies on the spirit of the law. Syntactic vs semantic kinda situation.
Arbitrary incompleteness invites gameability, and arbitrary specificity invites exceptioncraft.
You can quote text using a greater-than sign (>) and a space.
> Surely to be truthful is to be non-misleading...?
Read the linked post; this is not so. You can mislead with the truth. You can speak a wholly true collection of facts that misleads people. If someone misleads using a fully true collection of facts, saying they spoke untruthfully is confusing. Truth does not always lead to good inferences; truth does not have to be convenient, as you say in the OP. Truth can make you infer falsehoods.
Saying you put the value of truth above the value of morality on your list of values is analogous to saying you put the moral value of truth above morality itself; it’s like saying bananas are more fruity to you than fruit.
Where does non-misleadingness fall on your list of supposedly amoral values such as truth and morality? Is non-misleadingness higher than truth or lower?
The existence of natural abstractions is entirely compatible with the existence of language games. There are correct and incorrect ways to play language games.
Dialogue trees are the substrate of language games, and broader reality is the substrate of dialogue trees. Dialogue trees afford taking dialogical moves that are more or less arbitrary. A guy who goes around saying “claiming land for yourself and enforcing your claim is justice; Nozick is intelligent and his entitlement theory of justice vindicates my claim” will leave exact impressions on exact types of people, who will in turn respond in ways that are characteristic of themselves. Every branch of the dialogue tree will leave an audience with an impression of who is right, and some audiences have measurably better calibration.
Just because no one can draw perfect triangles doesn’t mean it’s nonsense to talk about such things.
In the Sequences, Yudkowsky has remarked over and over that it is futile to protest that you acted with propriety if you do not achieve the correct answer; read the twelfth virtue.
No; pointless for me to complain, to be clear.
The Principle of Nameless Heartsmarts: It is pointless to complain that I acted with propriety if in the end I was too dense to heed some relevant consideration.
You can’t say values “aren’t objective” without some semantic sense of objectivity that they are failing to fulfill.
If you can communicate such a sense to me, I can give you values to match. That doesn’t mean your sense of objectivity will have been perfect and unarbitrary; perhaps I will want to reconcile with you about our different notions of objectivity.
Still, I’m damn well going to try to be objectively good.
It just so happens that my values connote all of your values, minus the part about being culturally local; funny how that works.
If you explicitly tell me that your terminal values require culturally local connotations, then I can infer you would have been equally happy with different values had you been born in a different time or place. I would like to think that my conscience is like those of Sejong the Great and Benjamin Lay: relatively less dependent on my culture’s sticks and carrots.
The dictionary defines arbitrary as:
> based on random choice or personal whim, rather than any reason or system
The more considerate and reasoned your choice, the less random it is. If the truth is that your way of being considerate and systematic isn’t as good as it could have been, that truth is systematic and not magical. The reason for the non-maximal goodness of your policy is a reason you did not consider. The less considerate, the more arbitrary.
> There is no real reason to choose either the left or right side of the road for driving but it’s very useful to choose either of them.
Actually there are real reasons to choose left or right when designing your policy; you can appeal to human psychology; human psychology does not treat left and right exactly the same.
> If one person says “I don’t really need that many error codes, I don’t want to follow arbitrary choices and send 44 instead of 404”, this creates a mess for everyone who expects the standard to be followed.
If the mess created for everyone else truly outweighs the goodness of choosing 44, then it is arbitrary to prefer 44. You cannot make true arbitrariness truly strategic just by calling it so; there are facts of the matter besides your stereotypes. People using the word “arbitrary” to refer to something that is based on greater consideration quality are wrong by your dictionary definition and the true definition as well.
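To make the standards point concrete, here is a minimal hypothetical sketch: the 44-vs-404 numbers come from the comment above, but the client code, function names, and constants are my own illustration rather than anything from the original discussion. A caller written against the HTTP standard handles 404 gracefully, while a nonstandard 44 just looks like noise to it.

```python
# Hypothetical illustration of the 44-vs-404 point: clients are written
# against the shared standard, so a "simpler" nonstandard code breaks them.

STANDARD_NOT_FOUND = 404  # defined by the HTTP standard


def handle_status(code: int) -> str:
    """React to a server's status code the way a standard-following client would."""
    if 200 <= code < 300:
        return "success"
    if code == STANDARD_NOT_FOUND:
        return "resource missing: fall back gracefully"
    return f"unrecognized status {code}: abort and surface an error"


print(handle_status(404))  # graceful fallback, as everyone expects
print(handle_status(44))   # the nonstandard shortcut only creates a mess
```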
You are wrong in your conception of arbitrariness as being all-or-nothing; there are varying degrees, just as there are varying degrees of efficiency between chess players. A chess player, Bob, half as efficient as Kasparov, makes a lower-quality sum of considerations; not following Kasparov’s advice is arbitrary unless Bob can know somehow that he made better considerations in this case;
maybe Bob studied Kasparov’s biases carefully by attending to the common themes of his blunders, and the advice he’s receiving for this exact move looks a lot like a case where Kasparov would blunder. Perhaps in such a case Bob will be wrong and his disobedience will be arbitrary on net, but the disobedience in that case will be a lot less arbitrary than all his other opportunities to disobey Kasparov.
A policy that could be better — could be more good — is arbitrarily bad. In fact the phrase “arbitrarily bad” is redundant; you can just say “arbitrary.”
It is better to be predictably good than surprisingly bad, and it is better to be surprisingly good than predictably bad; that much will be obvious to everyone.
I think it is better to be surprisingly good than predictably good, and it is better to be predictably bad than surprisingly bad.
EDIT: wait, I’m not sure that’s right even by deontology’s standards; as a general categorical imperative, if you can predict something will be bad, you should do something surprisingly good instead, even if the predictability of the badness supposedly makes it easier for others to handle. No amount of predictable badness is easier for others to handle than surprising goodness.
EDIT EDIT: I find the implication that we can only choose between predictable badness and surprising badness to be very rarely true, but when it is true then perhaps we should choose to be predictable. Inevitably, people with more intelligence will keep conflicting with people with less intelligence about this; less intelligent people will keep seeing situations as choices between predictable badness and surprising badness, and more intelligent people will keep seeing situations as choices between predictable badness and surprising goodness.
Focusing on predictability is a strategy for people who are trying to minimize their expectedly inevitable badness. Focusing on goodness is a strategy for people who are trying to secure their expectedly inevitable weirdness.
No standing with whom? I am requesting that you not be cruel over shallow and irrelevant matters; that is exactly what I should be doing here no matter the density and inconsiderateness of you or anyone else.
My standing with omniscient beings is the standing that should primarily matter to allegedly rational people.