More complex values will not spontaneously form as terminal, built-in-brain values for animals that came into being through evolution. Evolution just doesn’t do that. Humans don’t rewire their brains and don’t reach into the Great Void of Light from the Beyond to randomly pick their terminal values.
Basically, take a simplified reduction of “freedom” (one still full of giant paintbrush handles): the systematic absence of conceptual incentives and punishment-threats organized so as to funnel the possible decisions of a mind, or set of minds, towards a specific subset of possible actions. A human mind would not just accidentally happen to form a terminal value around that (barring astronomical odds on the order of sun-explodes-next-second) without first developing terminal values around punishment-threats (which not all humans have, if any), decision-tree sizes, and the various other components of the very complex pattern we call “lack of freedom” (because lack of freedom is much easier to describe than freedom, and freedom is the absence or diminution of lack(s) of freedom).
I don’t see any evidence that enough humans happen to have most of the prerequisite terminal values for there to be even one specimen that has this complex construct as a terminal value.
As I said in a different comment, though, it’s very possible (and very likely) that the lighting-up of the mental node for freedom is a terminal value, which feels from the inside as though freedom itself were the terminal value. However, the terminal value is really just the perception of things that light up the “freedom!” mental node, not the concept of freedom itself.
Once you try to describe “freedom” in terms a program or algorithm could understand, you realize it becomes extremely difficult for the program even to know whether there is freedom in something or not. “Freedom” is an abstraction of multiple levels interacting at multiple scales, in complex ways far, far above the building blocks of matter and reality, and it requires values and algorithms for a lot of other things. You can take the output of this computation as a terminal value, but not the whole “freedom” business.
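To make that concrete, here is a minimal sketch of what I mean; every name and scoring rule below is an illustrative assumption, not a claim about actual minds. The point is only that the detector is a composite of many sub-computations, and the single scalar it emits is the one thing that can easily be valued directly:

```python
from dataclasses import dataclass, field

# Hypothetical toy model -- every name and formula here is an
# illustrative assumption, not a model of real cognition.

@dataclass
class WorldState:
    threats: list = field(default_factory=list)       # (target_mind, threat) pairs
    open_actions: dict = field(default_factory=dict)  # mind -> available actions

def punishment_threats(world, mind):
    """Sub-detector: incentives organized to funnel this mind's decisions."""
    return [t for target, t in world.threats if target == mind]

def decision_tree_size(world, mind):
    """Sub-detector: how many courses of action remain open to the mind."""
    return len(world.open_actions.get(mind, []))

def freedom_node_activation(world, mind):
    """The scalar the rest of the brain sees. Valuing THIS number is easy;
    valuing the whole multi-level computation it abstracts over is not."""
    options = decision_tree_size(world, mind)
    threats = punishment_threats(world, mind)
    return options / (1.0 + len(threats))

world = WorldState(threats=[("alice", "jail")],
                   open_actions={"alice": ["stay", "flee"]})
print(freedom_node_activation(world, "alice"))  # 1.0
```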
A very clever person might manage to trick their own brain by taking an already built-in terminal value on a freedom mental-node and hacking in safety-checks that force them to shut up and multiply: run the best available algorithms to evaluate “real” freedom-or-no-freedom, and only then light up the mental node based on the result. But that would require lots of training and mind-hacking.
Hence, I maintain that it’s extremely unlikely that anyone really has freedom itself as a terminal value, rather than feeling from the inside like they value freedom. A bit of Bayes suggests I shouldn’t even pay attention to the hypothesis, given the sheer number of values that register as false positives for being terminal (because they feel terminal from the inside) versus the number of known terminal values with this level of complexity and interconnection between many patterns, reality-referents, indirect valuations, and so on.
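To put rough numbers on that base-rate argument (the figures below are purely illustrative assumptions, not data): even if genuinely terminal values always feel terminal from the inside, the feeling is weak evidence when non-terminal values frequently produce the same feeling.

```python
# Illustrative Bayes with assumed numbers -- only the shape of the
# argument matters, not these particular values.
p_terminal = 0.01           # prior that a value this complex is genuinely terminal
p_feels_if_terminal = 1.0   # terminal values feel terminal from inside
p_feels_if_not = 0.50       # ...but so do many values that are not terminal

p_feels = (p_feels_if_terminal * p_terminal
           + p_feels_if_not * (1 - p_terminal))
posterior = p_feels_if_terminal * p_terminal / p_feels
print(round(posterior, 3))  # 0.02 -- the inside feeling barely moves the needle
```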
because lack of freedom is much easier to describe than freedom, and freedom is the absence or diminution of lack(s) of freedom
“Lack of freedom” can’t be significantly easier to describe than freedom—they differ by at most one bit.
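A minimal sketch of the one-bit point: whatever decision procedure computes “freedom”, its complement is the same procedure plus a single negation, so neither concept can be much simpler to describe than the other.

```python
# The complement of any predicate costs one extra negation -- constant
# overhead, not a genuinely simpler concept. (has_freedom is a stand-in
# here, not a real reduction of freedom.)
def complement(predicate):
    return lambda x: not predicate(x)

has_freedom = lambda situation: situation != "manacled in a dungeon"
lacks_freedom = complement(has_freedom)
print(lacks_freedom("manacled in a dungeon"))  # True
```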
No opinion on whether the mental node representing “freedom” or actual freedom is valued—that seems to suffer/benefit from all of the same issues as any other terminal value representing reality.
If someone tries to manacle me in a dungeon, I will perform great violence upon that person. I will give up food, water, shelter, and sleep to avoid it. I will sell prized possessions or great works of art if necessary to buy weapons to attack that person. I can’t think of a better way to describe what a terminal value feels like.
Manacling you in a dungeon triggers your mental node for freedom and also presents the appearance of restrictions and constraints, all the more so because you yourself are the direct subject. It lacks a control group and feels like a confirmation-biased experiment.
If I simply told you (and you have easy means of confirming that I’m telling the truth) that I’m restricting the movements of a dozen people you’ve never heard of, and the restriction of freedom is done in such a way that the “victims” will never even be aware that their freedoms are being restricted (e.g. giving a mental imperative to spend eight hours a day in a certain room with a denial-of-denial clause for it), would you still have the same intense this-is-wrong terminal value for no other reason than that their freedom is taken from them in some manner?
If so, why are employment contracts not making you panic in a constant stream of negative utility? Or compulsory education? Or prison? Or any other form of freedom reduction which you might not consider to be about “freedom” but which certainly fits most reductions of it?
Yes, I meant “freedom for me”—I thought that was implied.
If I simply told you (and you have easy means of confirming that I’m telling the truth) that I’m restricting the movements of a dozen people you’ve never heard of, and the restriction of freedom is done in such a way that the “victims” will never even be aware that their freedoms are being restricted (e.g. giving a mental imperative to spend eight hours a day in a certain room with a denial-of-denial clause for it), would you still have the same intense this-is-wrong terminal value for no other reason than that their freedom is taken from them in some manner?
I would not want to be one of those people. If you convincingly told me that I was one of those people, I’d try to get out of it. If I was concerned about those people and thought they also valued freedom, I’d try to help them.
employment contracts
My employment can be terminated at will by either party. There are some oppressive labor laws that make this less the case, but they mostly favor me and neither myself nor my employer is going to call on them. What’s an “employment contract” and why would I want one?
compulsory education
Compulsory education is horrible. It’s profoundly illiberal and I believe it’s a violation of the constitutional amendment against slavery. I will not send my children to school and “over my dead body” is my response to anyone who intends to take them. I try to convince my friends not to send their children to school either.
prison
I don’t intend to go to prison and would fight to avoid it. If my friends were in prison, I’d do what I could to get them out.
I would not want to be one of those people. If you convincingly told me that I was one of those people, I’d try to get out of it. If I was concerned about those people and thought they also valued freedom, I’d try to help them.
...therefore, if you are never aware of your own lack of freedom, you do not assign value to it. Which loops back around to the appearance of freedom being your true value. That would be the most uncharitable interpretation.
It seems, however, that in general you will take the course of action which maximizes the visible freedom you can perceive, rather than a course of action you know to be optimized for wide-scale freedom in general. That looks more like a cognitive alert with certain triggers, plus a high value placed on not setting off that particular alert, than like valuing the principles themselves.
Edit: Also, thanks for indulging my curiosity and for all your replies on this topic.
Would you sell possessions to buy weapons to attack a person who runs an online voluntary community and changes the rules without consulting anyone?
If the two situations are comparable, I think it’s important to know exactly why.
Also note that manacling you in a dungeon isn’t just eliminating your ability to freely choose things arbitrarily; it’s preventing you from having satisfying relationships, access to good food, meaningful life’s work, and other pleasures. Would you mind being in a prison that enabled you to do those things?
Would you mind being in a prison that enabled you to do those things?
Yes. If this were many years ago and I weren’t so conversant on the massive differences between the ways different humans see the world, I’d be very confused that you even had to ask that question.
Would you sell possessions to buy weapons to attack a person who runs an online voluntary community and changes the rules without consulting anyone?
No. There are other options. At the moment I’m still vainly hoping that Eliezer will see reason. I’m strongly considering just dropping out.
I feel like asking this question is wrong, but I want the information:
If I know that letting you have freedom will be hurtful (like, say, I tell you you’re going to get run over by a train, and you tell me you won’t, but I know that you’re in denial-of-denial and subconsciously seeking to walk on train tracks, and my only way to prevent your death is to manacle you in a dungeon for a few days), would you still consider the freedom terminally important? More important than the hurt? Which other values can be traded off? Would it be possible to figure out an exchange rate with enough analysis and experiment?
Yes. If this were many years ago and I weren’t so conversant on the massive differences between the ways different humans see the world, I’d be very confused that you even had to ask that question.
Regarding this, what if I told you “Earth was a giant prison all along. We just didn’t know. Also, no one built the prison, and no one is actively working to keep us in here—there never was a jailor in the first place, we were just born inside the prison cell. We’re just incapable of taking off the manacles on our own, since we’re already manacled.”? In fact, I do tell you this. It’s pretty much true that we’ve been prisoners of many, many things. Is your freedom node only triggered at the start of imprisonment, the taking away of a freedom once had? What if someone is born in the prison Raemon proposes? Is it still inherently wrong? Is it inherently wrong that we are stuck on Earth? If no, would it become inherently wrong if you knew that someone is deliberately keeping us here on Earth by actively preventing us from learning how to escape Earth?
The key point being: What is the key principle that triggers your “Freedom” light? The causal action that removes freedoms? The intentions behind the constraints?
It seems logical to me to assume that if you have freedom as a terminal value, then being able to do anything, anywhere, be anything, anyhow, anywhen, control time and space and the whole universe at will better than any god, without any possible restrictions or limitations of any kind, should be the Ultimately Most Supremely Good maximal possible utility optimization, and therefore reality and physics would be your worst possible Enemy, seeing as how it is currently the strongest Jailer that restricts and constrains you the most. I’m quite aware that this is hyperbole and most likely a strawman, but it is, to me, the only plausible prediction for a terminal value of yourself being free.
This should answer most of the questions above. Yes, the universe is terrible. It would be much better if the universe were optimized for my freedom.
All values are fungible. The exchange rate is not easily inspected, and thought experiments are probably no good for figuring it out.
You’re right, this does answer most of my questions. I had made incorrect assumptions about what you would consider optimal.
After updating on this, it now appears much more likely to me that you place terminal value on your freedom node, and that the node is triggered by fairly rational algorithms that really do attempt to detect restrictions and constraints, rather than by a mere feeling of control. Is this closer to how you would describe your value?
I’m still having trouble with the idea of a universe optimized for one’s own personal freedom being the best thing (by default I tend to think about optimizing the collective summed utilities of sets of minds, rather than of a single mind). It is not what I expected.
“freedom as a terminal value” != “freedom as the only terminal value”
True, and I don’t quite see where I implied this. If you’re referring to the optimal-universe question, it seems quite trivial that if the universe literally acts according to your every will with no restrictions whatsoever, all your other terminal values are instantly fulfilled to their absolute maximal states (including unbounded values that can increase to infinity), along with adjustment of their referents (if that’s even relevant anymore).
No compromise is needed, since you’re free from the laws of logic and physics and whatever else might prevent you from tiling the entire universe with paperclips AND tiling the entire universe with giant copies of Eliezer’s mind.
So if that sort of freedom is a terminal value, this counterfactual universe trivially becomes the optimal target, since it’s basically whatever you would find to be your optimal universe, regardless of any restrictions.