Let me see if I understand you correctly: if someone cares about how Less Wrong is run, what they should do is not comment on Less Wrong—least of all in discussions on Less Wrong about how Less Wrong is run (“meta threads”). Instead, what they should do is move to California and start attending Alicorn’s dinner parties.
Have I got that right?
That’s how politics usually works, yes.
Can we call this the social availability heuristic?
Also, you have to attend dinner parties on a day when Eliezer is invited and doesn’t decline due to being on a weird diet that week.
Don’t worry, I’m sure that venue’s attendees are selected neutrally.
All you have to do is run into me in any venue whatsoever where the attendees weren’t filtered by their interest in meta threads. :)
But now that you’ve stated this, you have the ability to rationalize any future IRL meta discussion...
Can “Direct email, skype or text-chat communications to E.Y.” count as a venue? Purely out of curiosity.
The problem is that if you initiate it, it’s subject to the Loss Aversion effect where the dissatisfied speak up in much greater numbers.
I don’t see what this has to do with “loss aversion” (the phenomenon where people think losing a dollar is worse than failing to gain a dollar they could have gained), though that’s of course a tangential matter.
The point here is—and I say this with all due respect—it looks to me like you’re rationalizing a decision made for other reasons. What’s really going on here, it seems to me, is that, since you’re lucky enough to be part of a physical community of “similar” people (in which, of course, you happen to have high status), your brain thinks they are the ones who “really matter”—as opposed to abstract characters on the internet who weren’t part of the ancestral environment (and who never fail to critique you whenever they can).
That doesn’t change the fact that this is an online community, and as such, is for us abstract characters, not your real-life dinner companions. You should be taking advice from the latter about running this site to about the same extent that Alicorn should be taking advice from this site about how to run her dinner parties.
Do you have advice on how to run my dinner parties?
Vaniver and DaFranker have both offered sensible, practical, down-to-earth advice. I, on the other hand, have one word for you: Airship.
Not plastics?
Consider eating Roman-style to increase the intimacy / as a novel experience. Unfortunately, this is made much easier with specialized furniture, but you should be able to improvise with pillows. It is also a radically different way to eat, one that predates the invention of the fork (and so will work fine with hands or chopsticks, but not modern implements).
Consider seating logistics, and experiment with having different people decide who sits where (or next to whom). Dinner parties tend to turn out differently with different arrangements, but different subcultures will have different algorithms for establishing optimal seating, so the experimentation is usually necessary (and having different people decide serves both as a form of blinding and as a way to turn up evidence to isolate the algorithm faster).
Huh, I haven’t been assigning seats at all except for reserving the one with easiest kitchen access for myself. I’ve just been herding people towards the dining table.
Was Eliezer “lucky” to have cofounded the Singularity Institute and Overcoming Bias? “Lucky” to have written the Sequences? “Lucky” to have founded LessWrong? “Lucky” to have found kindred minds, both online and in meatspace? Does he just “happen” to be among them?
Or has he, rather, searched them out and created communities for them to come together?
The online community of LessWrong does not own LessWrong. EY owns LessWrong, or some combination of EY, the SI, and whatever small number of other people they choose to share the running of the place with. To a limited extent it is for us, but its governance is not at all by us, and it wouldn’t be LessWrong if it was. The system of government here is enlightened absolutism.
The causes of his being in such a happy situation (is that better?) were clearly not the point here, and, quite frankly, I think you knew that.
But if you insist on an answer to this irrelevant rhetorical question, the answer is yes. Eliezer_2012 is indeed quite fortunate to have been preceded by all those previous Eliezers who did those things.
Then, as I implied, he should just admit to making a decision on the basis of his own personal preference (if indeed that’s what’s going on), instead of constructing a rationalization about the opinions of offline folks being somehow more important or “appropriately” filtered.
I would replace preference with hypothesis of what constitutes the optimal rationality-refining community.
They are sensibly the same, but I find the latter to be a more useful reduction that is more open to being refined in turn.
Eliezer only got to be Eliezer_2012 by doing all those things. Now, maybe Eliezer_201209120 did wake up this morning, as every morning, and think, “how extraordinarily, astoundingly lucky I am to be me!”, and there is some point to that thought—but not one that is relevant to this conversation.
It is tautologically his preference. I see no reason to think he is being dishonest in his stated reasons for that preference.
I’m afraid the above comment does not contribute any additional information to this discussion, and so I have downvoted it accordingly. Any substantive reply would consist of the repetition of points already made.
You’re welcome.
This is a community blog. If your community has a dictator, you should overthrow him.
Is the overthrowing of dictators a terminal value to you, or is it that you associate it with good consequences?
A little of both. Freedom is a terminal value, and heuristically dictators cause bad consequences.
My own view: Dictators in countries tend to cause bad consequences. Dictators in forums tend to cause good consequences.
Do you have any evidence for that? In my experience, it all depends on the dictator, not on the venue.
It’s easier to leave a forum than a country. Forum-dictators who abuse their power end up with empty forums.
Real world dictators who abuse their power often end up dead. (But perhaps not as much as real world dictators who do not abuse their power enough to secure it.)
Not as often as you seem to think.
Perhaps I misunderstood what ArisKatsaris was saying. I thought he meant something like this: dictators in countries tend to make living conditions in those countries less desirable, whereas dictators in forums tend to make posting in those forums (and/or reading them) more desirable.
If this is true, your objection is somewhat tangential to the topic (though an empty forum is less desirable than an active one). But perhaps he meant something else?
Since it’s easier to leave, a dictator in a forum has more motivation not to abuse his power.
Just my own personal experience of how moderated vs non-moderated forums tend to go, and as for countries, likewise my impression of what countries seem nice to live in.
You’re probably right about modern countries; however, as far as I understand, historically some countries did reasonably well under a dictatorship. Life under Hammurabi was far from being all peaches and cream, but it was still relatively prosperous, compared to the surrounding nations. A few Caesars did a pretty good job of administering Rome; of course, their successors royally screwed the whole thing up. Likewise, life in Tzarist Russia went through its ups and downs (mostly downs, to be fair).
Unfortunately, the kind of person who seeks (and is able to achieve) absolute power is usually exactly the kind of person who should be kept away from power if at all possible. I’ve seen this happen in forums, where the unofficial grounds for banning a user inevitably devolve into “he doesn’t agree with me”, and “I don’t like his face, virtually speaking”.
“Dictators” in forums can’t kill people or hold them hostage.
Right, but that doesn’t mean they tend to be beneficial, either. We’re not arguing over which dictator is the worst, but whether dictators in forums are diametrically opposed to their real-world cousins.
I’d like to point out that Overcoming Bias, back in the day, was a dictatorship: Robin and Eliezer were explicitly in total control. Whereas Less Wrong was explicitly set up to be community-moderated, with voting taking the place of moderator censorship. And the general consensus has always been that LW was an improvement over OB.
Freedom is never a terminal value. If you dig a bit, you should be able to explain why freedom is important/essential in particular circumstances.
Ironically, the appearance of freedom can be a default terminal value for humans and some other animals, if you take evolutionary psychology seriously. Or, to be more accurate, the appearance of absence of imposed restrictions can be a default terminal value that receives positive reinforcement cookies in the brain of humans and some other animals. Claustrophobia seems to be a particular subset of this that automates the jump from certain types of restrictions through the whole mental process that leads to panic-mode.
The abstract concept of freedom and its reality-referent pattern, however, would be extremely unlikely to end up as a terminal value, if only for its sheer mathematical complexity.
I agree with this.
I’d be cautious about saying something’s never a terminal value. Given my model of the EEA, it wouldn’t be terribly surprising to me if some set of people did have poor reactions to certain types of external constraint independently of their physical consequences, though “freedom” and its various antonyms seem too broad to capture the way I’d expect this to work.
Someone’s probably studied this, although I can’t dig up anything offhand.
I take back the “never” part, it is way too strong. What I meant to say is that the probability of someone proclaiming that freedom is her terminal value not having dug deep enough to find her true terminal values is extremely high.
That seems reasonable. Especially given how often freedom gets used as an applause light.
Yes, I was commenting on this at the same time. The mental perception of restrictions, or the mental perception of absence of restrictions, can become a direct brainwired value through evolution, and is a simple enough step from other things already in there, AFAICT. Freedom itself, however, independent of perception/observation and as a pattern of real interactions and decision choices and so on, seems far too complex to be something the brain would just randomly stumble upon in one go, especially only in some humans and not others.
I agree that freedom is an instrumental value. I disagree that it is never a terminal value. It is constitutive of the good life.
See if you can replace “freedom” with its substance, and then evaluate whether that substance is something the human brain would be likely to just happen to, once in a while, find as a terminal, worth-in-itself value for some humans but not others, considering the complexity of this substance.
Yes, the mental node/label “freedom” can become a terminal value (a single mental node is certainly simple enough for evolution to stumble upon once in a while), but that’s directly related to a perception of absence of constraints or restrictions within a situation or context.
I don’t see what you’re getting at here. All terminal values are agent-specific.
More complex values will not spontaneously form as terminal, built-in-brain values for animals that came into being through evolution. Evolution just doesn’t do that. Humans don’t rewire their brains and don’t reach into the Great Void of Light from the Beyond to randomly pick their terminal values.
Basically, take a simplified reduction of “freedom” (one still full of giant paintbrush handles): the systematic absence of conceptual incentives and punishment-threats organized so as to funnel the possible decisions of a mind, or set of minds, towards a specific subset of possible actions. That is not something a human mind would just accidentally happen to form a terminal value around (barring astronomical odds on the order of sun-explodes-next-second) without first developing terminal values around punishment-threats (which not all humans have, if any), decision-tree sizes, and various other components of the very complex pattern we call “lack of freedom” (because lack of freedom is much easier to describe than freedom, and freedom is the absence or diminution of lack(s) of freedom).
I don’t see any evidence that a sufficient number of humans happen to have most of the prerequisite terminal values for there to be any specimen which has this complex construct as a terminal value.
As I said in a different comment, though, it’s very possible (and very likely) that the lighting-up of the mental node for freedom could be a terminal value, which feels from inside like freedom itself is a terminal value. However, the terminal value is really just the perception of things that light up the “freedom!” mental node, not the concept of freedom itself.
Once you try to describe “freedom” in terms that a program or algorithm could understand, you realize that it becomes extremely difficult for the program to even know whether there is freedom in something or not, and that it is an abstraction of multiple levels interacting at multiple scales in complex ways far, far above the building blocks of matter and reality, one which requires values and algorithms for a lot of other things. You can value the output of this computation as a terminal value, but not the whole “freedom” business.
A very clever person might be capable of tricking their own brain by abusing an already built-in terminal value on a freedom mental-node by hacking in safety-checks that will force them to shut up and multiply, using best possible algorithms to evaluate “real” freedom-or-no-freedom, and then light up the mental node based on that, but it would require lots of training and mind-hacking.
Hence, I maintain that it’s extremely unlikely that someone really has freedom itself as a terminal value, rather than feeling from inside like they value freedom. A bit of Bayes suggests I shouldn’t even pay attention to it in the space of possible hypotheses, because of the sheer amount of values that get false positives as being terminal due to feeling as such from inside versus the amount of known terminal values that have such a high level of complexity and interconnections between many patterns, reality-referents, indirect valuations, etc.
“Lack of freedom” can’t be significantly easier to describe than freedom—they differ by at most one bit.
No opinion on whether the mental node representing “freedom” or actual freedom is valued—that seems to suffer/benefit from all of the same issues as any other terminal value representing reality.
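One way to cash out the “at most one bit” claim, in description-complexity terms (this framing is mine, not the original commenter’s): if a program decides whether a situation counts as “freedom”, then flipping its output decides “lack of freedom”, so the two descriptions differ by at most a constant-sized wrapper:

    K(\text{lack of freedom}) \le K(\text{freedom}) + O(1)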
If someone tries to manacle me in a dungeon, I will perform great violence upon that person. I will give up food, water, shelter, and sleep to avoid it. I will sell prized possessions or great works of art if necessary to buy weapons to attack that person. I can’t think of a better way to describe what a terminal value feels like.
Manacling you in a dungeon also triggers your mental node for freedom and also triggers the appearance of restrictions and constraints, all the more so since you are the direct subject yourself. It lacks a control group and feels like a confirmation-biased experiment.
If I simply told you (and you have easy means of confirming that I’m telling the truth) that I’m restricting the movements of a dozen people you’ve never heard of, and the restriction of freedom is done in such a way that the “victims” will never even be aware that their freedoms are being restricted (e.g. giving a mental imperative to spend eight hours a day in a certain room with a denial-of-denial clause for it), would you still have the same intense this-is-wrong terminal value for no other reason than that their freedom is taken from them in some manner?
If so, why are employment contracts not making you panic in a constant stream of negative utility? Or compulsory education? Or prison? Or any other form of freedom reduction which you might not consider to be about “freedom” but which certainly fits most reductions of it?
Yes, I meant “freedom for me”—I thought that was implied.
I would not want to be one of those people. If you convincingly told me that I was one of those people, I’d try to get out of it. If I was concerned about those people and thought they also valued freedom, I’d try to help them.
My employment can be terminated at will by either party. There are some oppressive labor laws that make this less the case, but they mostly favor me, and neither I nor my employer is going to call on them. What’s an “employment contract” and why would I want one?
Compulsory education is horrible. It’s profoundly illiberal and I believe it’s a violation of the constitutional amendment against slavery. I will not send my children to school and “over my dead body” is my response to anyone who intends to take them. I try to convince my friends not to send their children to school either.
I don’t intend to go to prison and would fight to avoid it. If my friends were in prison, I’d do what I could to get them out.
...therefore, if you are never aware of your own lack of freedom, you do not assign value to this. Which loops around back to the appearance of freedom being your true value. This would be the most uncharitable interpretation.
It seems, however, that in general you will be taking the course of action which maximizes the visible freedom that you can perceive, rather than a course of action you know to be optimized in general for widescale freedom. It seems more like a cognitive alert to certain triggers, and a high value being placed on not triggering this particular alert, than valuing the principles.
Edit: Also, thanks for indulging my curiosity and for all your replies on this topic.
Would you sell possessions to buy weapons to attack a person who runs an online voluntary community and changes the rules without consulting anyone?
If the two situations are comparable, I think it’s important to know exactly why.
Also note that manacling you in a dungeon isn’t just eliminating your ability to freely choose things arbitrarily; it’s preventing you from having satisfying relationships, access to good food, meaningful life’s work, and other pleasures. Would you mind being in a prison that enabled you to do those things?
Yes. If this were many years ago and I weren’t so conversant on the massive differences between the ways different humans see the world, I’d be very confused that you even had to ask that question.
No. There are other options. At the moment I’m still vainly hoping that Eliezer will see reason. I’m strongly considering just dropping out.
I feel like asking this question is wrong, but I want the information:
If I know that letting you have freedom will be hurtful (like, say, I tell you you’re going to get run over by a train, and you tell me you won’t, but I know that you’re in denial-of-denial and subconsciously seeking to walk on train tracks, and my only way to prevent your death is to manacle you in a dungeon for a few days), would you still consider the freedom terminally important? More important than the hurt? Which other values can be traded off? Would it be possible to figure out an exchange rate with enough analysis and experiment?
Regarding this, what if I told you “Earth was a giant prison all along. We just didn’t know. Also, no one built the prison, and no one is actively working to keep us in here—there never was a jailor in the first place, we were just born inside the prison cell. We’re just incapable of taking off the manacles on our own, since we’re already manacled.”? In fact, I do tell you this. It’s pretty much true that we’ve been prisoners of many, many things. Is your freedom node only triggered at the start of imprisonment, the taking away of a freedom once had? What if someone is born in the prison Raemon proposes? Is it still inherently wrong? Is it inherently wrong that we are stuck on Earth? If no, would it become inherently wrong if you knew that someone is deliberately keeping us here on Earth by actively preventing us from learning how to escape Earth?
The key point being: What is the key principle that triggers your “Freedom” light? The causal action that removes freedoms? The intentions behind the constraints?
It seems logical to me to assume that if you have freedom as a terminal value, then being able to do anything, anywhere, be anything, anyhow, anywhen, control time and space and the whole universe at will better than any god, without any possible restrictions or limitations of any kind, should be the Ultimately Most Supremely Good maximal possible utility optimization, and therefore reality and physics would be your worst possible Enemy, seeing as how it is currently the strongest Jailer that restricts and constrains you the most. I’m quite aware that this is hyperbole and most likely a strawman, but it is, to me, the only plausible prediction for a terminal value of yourself being free.
This should answer most of the questions above. Yes, the universe is terrible. It would be much better if the universe were optimized for my freedom.
All values are fungible. The exchange rate is not easily inspected, and thought experiments are probably no good for figuring them out.
You’re right, this does answer most of my questions. I had made incorrect assumptions about what you would consider optimal.
After updates based on this, it now appears much more likely to me that you place terminal valuation on your freedom node such that it gets triggered by more rational algorithms that really do attempt to detect restrictions and constraints, in more than a mere feeling-of-control manner. Is this closer to how you would describe your value?
I’m still having trouble with the idea of considering a universe optimized for one’s own personal freedom as the best thing (by default I tend to think of how to optimize for the collective sum utilities of sets of minds, rather than for just one). It is not what I expected.
“freedom as a terminal value” != “freedom as the only terminal value”
True, and I don’t quite see where I implied this. If you’re referring to the optimal universe question, it seems quite trivial that if the universe literally acts according to your every will with no restrictions whatsoever, any other terminal values will instantly be fulfilled to their absolute maximal states (including unbounded values that can increase to infinity) along with adjustment of their referents (if that’s even relevant anymore).
No compromise is needed, since you’re free from the laws of logic and physics and whatever else might prevent you from tiling the entire universe with paperclips AND tiling the entire universe with giant copies of Eliezer’s mind.
So if that sort of freedom is a terminal value, this counterfactual universe trivially becomes the optimal target, since it’s basically whatever you would find to be your optimal universe regardless of any restrictions.
Sometimes freedom is a bother, and sometimes it’s a way to die quickly, and sometimes it’s essential for survival and that “good life” of yours (depending on what you mean by it). You can certainly come up with plenty of examples of each. I recommend you do before pronouncing that freedom is a terminal value for you.
With the caveats:
If the dictator isn’t particularly noticed to be behaving in that kind of way, it is probably not worth enforcing the principle. I.e. it is fine for people to have the absolute power to do whatever they want regardless of the will of the people, as long as they don’t actually use it. A similar principle would also apply if the President of the United States started issuing pardons for whatever he damn well pleased. If US television informs me correctly (and it may not), he is technically allowed to do so, but I don’t imagine that power would remain if it was used frequently for his own ends. (And I doubt the reaction against excessive abuse of power would be limited to just not voting for him again.)
The ‘should’ is weak. I.e. it applies all else being equal, but with a huge “if it is convenient to do so and you haven’t got something else you’d rather do with your time” implied.
Agreed. With the caveat that I think all ’should’s are that weak.
“If you see someone about to die and can save them, you should.”
Now, you might agree or disagree with this. But “If you see someone about to die and can save them, you should, if it is convenient to do so and you haven’t got something else you’d rather do with your time” seems more like disagreement to me.
I don’t think so. I agree with that statement, with the same caveats. If there are also 100 people about to die and I can save them instead, I should probably do so. I suppose it depends how morally-informed you think “something else you’d rather do with your time” is supposed to be.
But Eliezer Yudkowsky, too, is subject to the loss aversion effect. Just as those dissatisfied with changes overweight change’s negative consequences, so does Eliezer Yudkowsky overweight his dissatisfaction with changes initiated by the “community.” (For example, increased tolerance of responding to “trolling.”)
Moreover, if you discount the result of votes on rules, why do you assume votes on other matters are more rational? The “community” uses votes on substantive postings to discern a group consensus. These votes are subject to the same misdirection through loss aversion as are procedural issues. If the community has taken a mistaken philosophical or scientific position, people who agree with that position will be biased to vote down postings that challenge that position, a change away from a favored position being a loss. (Those who agree with the newly espoused position will be less energized, since they weight their potential gain less than their opponents weigh their potential loss.)
If you think “voting” is so highly distorted that it fails to represent opinion, you should probably abolish it entirely.
True. For that to be an effective communication channel, there would need to be a control group. As for how to create that control group or run any sort of blind (let alone double-blind) testing… yeah, I have no idea. Definitely a problem.
ETA: By “I have no idea”, I mean “Let me find my five-minute clock and I’ll get back to you on this if anything comes up”.
So I thought for five minutes, then looked at what’s been done in other websites before.
The best I have is monthly surveys with randomized questions drawn from a pool of things that matter for LessWrong (according to the current or then-current staff, I would presume), plus a few community suggestions, and then possibly a later implementation of a weighting algorithm that applies diminishing returns when multiple users with similar thread participation (e.g. two people who always post in the same thread) give similar feedback.
The second part is full of holes and horribly prone to “Death by Poking With Stick”, but an ideal implementation of this seems like it would get a lot more quality feedback than what little gets through low-bandwidth in-person conversations.
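For concreteness, here is a minimal sketch of how that diminishing-returns weighting could work, assuming per-user thread-participation data is available. The function names and the Jaccard-overlap penalty are illustrative choices of mine, not an existing LessWrong mechanism:

    # A sketch (not anything LessWrong actually runs) of the "diminishing returns" idea:
    # feedback from users whose thread participation heavily overlaps counts for less,
    # so a tight cluster of co-posters can't dominate a survey.
    from itertools import combinations

    def jaccard(a, b):
        """Overlap between two users' sets of threads posted in."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def feedback_weights(threads_by_user, overlap_penalty=1.0):
        """Weight each user's survey response by 1 / (1 + penalized overlap with others).

        threads_by_user maps user id -> iterable of thread ids they posted in.
        Users who always post in the same threads end up sharing weight; an
        isolated user keeps a weight of 1.0.
        """
        users = list(threads_by_user)
        total_overlap = {u: 0.0 for u in users}
        for u, v in combinations(users, 2):
            sim = jaccard(threads_by_user[u], threads_by_user[v])
            total_overlap[u] += sim
            total_overlap[v] += sim
        return {u: 1.0 / (1.0 + overlap_penalty * total_overlap[u]) for u in users}

    # Example: two users who always post together get down-weighted relative to a loner.
    print(feedback_weights({
        "alice": ["meta-1", "ai-3"],
        "bob": ["meta-1", "ai-3"],
        "carol": ["fiction-7"],
    }))  # alice and bob get 0.5 each, carol keeps 1.0

Two users who always post in the same threads end up sharing roughly one vote’s worth of weight between them, while an isolated user keeps full weight; the penalty constant is a free parameter that would have to be tuned.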
There are other, less practical (but possibly more accurate) alternatives, of course. Like picking random LW users every so often, appearing at their front door, giving them a brain-scan headset (e.g. an Emotiv Epoc), and having them wear the headset while being on LW so you can collect tons of data.
I’d stick with live feedback and simple surveys to begin with.