Where the planet actually goes has nothing to do with where it should go. Shouldness is about preference, and you said nothing of preference in your example. If the planet is on a collision course with Earth, I say that it should turn the other way (and it could, if an appropriate system were placed in interaction with it).
And that would be shouldness with respect to you, not the planet. I submit that you’re committing the mind-projection fallacy here.
The Eliezer_Yudkowsky article “Possibility and Couldness” (or some other article in the series) identifies “shouldness” as the algorithm’s internal recognition of a state it ranks higher in wanting to bring about. So I can in fact map that concept onto the planet, in that it identifies and acts on the preference for moving as per the laws of motion and gravitation.
This doesn’t capture the concept of an error. Preference should also be seen as an abstract mathematical object which the algorithm doesn’t necessarily maximize, but tries to set as high as it can. Of course, when I talk of shouldness, I must refer to a particular preference; in this case I referred to mine. Notice that if I can’t move the planet away, it in fact collides with Earth, but it doesn’t mean that it should collide with Earth according to my preference. Likewise, you can’t assert that according to the planet’s preference, it should collide with Earth merely from the fact that it does: maybe the planet wants to be a whale instead, but can’t.
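To put the distinction concretely, here is a toy sketch (the state names and utilities are invented purely for illustration, not a real model of a planet or of anyone’s proposal): preference is a ranking over all states, including unreachable ones, while the observable behavior only reflects the best of the reachable options.

```python
# Toy sketch only: invented states and utilities, not a real model of anything.
def preference(state):
    """An abstract ranking over ALL states, reachable or not."""
    return {"whale": 10, "orbit": 1, "collide_with_earth": 0}[state]

reachable = ["orbit", "collide_with_earth"]      # what physics actually permits

chosen = max(reachable, key=preference)          # the best the system can do
top_ranked = max(["whale", "orbit", "collide_with_earth"], key=preference)

print(chosen)      # "orbit"  -- what you observe extensionally
print(top_ranked)  # "whale"  -- ranked highest, but invisible in the trajectory
```

The observed outcome is “orbit” whether or not the top-ranked state is reachable, which is why the preference can’t be read off the trajectory alone.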
Preference should also be seen as an abstract mathematical object which the algorithm doesn’t necessarily maximize, but tries to set as high as it can.
Right, it maximizes according to constraints. And?
Notice that if I can’t move the planet away, it in fact collides with Earth, but it doesn’t mean that it should collide with Earth according to my preference
Right, your preference is different from the planet’s. That was your error in your last response.
Likewise, you can’t assert that according to the planet’s preference, it should collide with Earth merely from the fact that it does: maybe the planet wants to be a whale instead, but can’t.
The planet doesn’t want to be a whale; that wouldn’t minimize its Gibbs Free Energy in its local domain of attraction.
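For concreteness, the quantity being invoked here is just the textbook Gibbs free energy (whether it is even the right quantity for orbital mechanics is a fair question, raised below):

```latex
% Textbook definition; nothing planet-specific is being claimed here.
G = H - TS, \qquad \Delta G \le 0 \ \text{ for a spontaneous change at constant } T, P
```

and what a system actually reaches is a local minimum of G within its current basin, not necessarily the global one.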
Notice that if I can’t move the planet away, it in fact collides with Earth, but it doesn’t mean that it should collide with Earth according to my preference
Right, your preference is different from the planet’s. That was your error in your last response.
My preference is over everything, the planet included. By saying “the planet shouldn’t collide with Earth” I mean that I should make the planet not collide with Earth. I’m not talking about the planet’s preference in this sentence; I’m talking only about my own preference.
Likewise, you can’t assert that according to the planet’s preference, it should collide with Earth merely from the fact that it does: maybe the planet wants to be a whale instead, but can’t.
The planet doesn’t want to be a whale; that wouldn’t minimize its Gibbs Free Energy in its local domain of attraction.
That the planet wants to be a whale is a hypothetical; accept it when reading what depends on accepting it. If the planet does in fact want to be a whale, it can still be unable to make that happen, and you may still observe it moving along its orbit. You can’t assert that it doesn’t want to be a whale merely from extensionally observing how it actually moves.
You are confusing the variational formulation of the laws of physics with the preference of an optimization process (probably because in both cases you maximize/minimize something). An optimization process actually optimizes stuff over time (at least in the simpler cases, e.g. humans), while the variational form of the laws of physics just says that the true solution (the one describing what will actually happen) can be represented as the maximum/minimum of a certain function, given the constraints. This is just a convenient form for finding approximate solutions and for understanding the system’s properties.
The same factual outcome can be written as the maximum of many different functions under different constraints. One function for which you can seek an extremum given constraints describes the behavior of the system on the level of physics (for example, via the principle of least action; I forgot my physics, but it doesn’t look like Gibbs free energy applies to the motion of planets). A completely different function for which you can seek an extremum given constraints describes its behavior on the level of preference: that’s utility. Both accounts give the same solution stating what will actually happen, but the functions are different.
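To make the contrast concrete, here is a rough sketch with generic symbols (nothing here is tied to a particular system; it only shows that the two extremal descriptions involve different functions):

```latex
% Physics-level account: the actual trajectory q(t) makes the action stationary.
S[q] = \int_{t_0}^{t_1} L(q, \dot{q}, t)\, dt, \qquad \delta S[q] = 0

% Preference-level account: the chosen option maximizes (expected) utility.
a^{*} = \arg\max_{a \in \text{Options}} \mathbb{E}\left[ U(\text{outcome}(a)) \right]
```

When the agent succeeds, both pick out the same actual behavior, but S and U are different objects.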
“Shouldness” refers to a particular very specific way of presenting the system’s behavior, and it’s not free energy. Notice that you can describe AI’s or man’s behavior with physical variational principles as well, but that will have nothing to do with their preference.
“Shouldness” refers to a particular very specific way of presenting the system’s behavior, and it’s not free energy. Notice that you can describe AI’s or man’s behavior with physical variational principles as well, but that will have nothing to do with their preference.
It seems to me that what SilasBarta is asking for here is a definition of shouldness such that the above statement holds. Why is it invalid to think that the system “wants” its physics? All you are indicating is that such is not what’s intended (which I’m sure SilasBarta knows)...
As far as variational principles go, one difference is that a physical system displays no preference among the different local extrema. (IIRC you can even come up with models where the same system will minimize (an) action for some initial conditions and maximize it for others.) This makes a Lagrangian-style physical system a pretty poor CSA even if you go out of your way to model it as one.
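Worth double-checking, but I believe the textbook caveat points the same way: “least” action is really stationary action, and even the simplest systems stop minimizing it past a certain point.

```latex
% "Least" action is really stationary action:
\delta S = 0

% Standard example (harmonic oscillator, L = (1/2) m \dot{q}^2 - (1/2) m \omega^2 q^2):
% the classical path is a genuine minimum of S only for elapsed times T < \pi/\omega
% (half a period); for longer intervals it is a saddle point, not a minimum.
```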
Nothing singles out a particular variational formulation of physical laws as preference, among all the other equivalent formulations. Stating that the planet wants to minimize its action or whatever is as arbitrary as saying that it wants to be a whale. Silas Barta was asserting that “free energy” is the answer, which seems to be wrong on multiple counts.
Stating that the planet wants to minimize its action or whatever is as arbitrary as saying that it wants to be a whale. Silas Barta was asserting that “free energy” is the answer
No, I wasn’t, but I couldn’t even follow what your point was, once you started equating your own “shouldness” with the planet’s shouldness, as if that implied some kind of contradiction if they’re different. So, I didn’t follow up.
The point was, if indeed we are all fully deterministic, and planets are fully deterministic, and planets embody the laws of physics, the concept of “shouldness” must be equally applicable in both cases. (More generally, I can’t distinguish “agent” type algorithms from “non-agent” type algorithms, so I don’t know what the alternative is.)
You “could jump off that cliff, if you wanted to.” But as Eliezer_Yudkowsky notes in the link above, this statement is completely consistent with “It is physically impossible that you will jump off that cliff.” Because the “causal forces within physics that are you” cannot reach that state.
And there’s the kicker: that situation is no different from that of a planet: whatever it “wishes”, it’s physically impossible to do anything but follow the path dictated by physics.
My point about free energy was just a) to do a simple “reality check” (not the only check you can do) that would justify saying “the planet doesn’t want to be a whale”, and b) to note that every system will minimize its free energy within its local domain of attraction. Just as water will flow downhill spontaneously, but it won’t jump out of a basin just because that would get it even further downhill.
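The basin analogy can be made literal with a toy numerical sketch (the potential and step size are invented for illustration): a pure gradient-follower started in a shallow basin settles at that basin’s bottom even though a deeper basin exists next door.

```python
# Toy illustration of "minimization only within the local domain of attraction".
# The potential and numbers are invented for this sketch.
def potential(x):
    # Double well: a shallow basin near x = +1 and a deeper one near x = -2.
    return (x - 1) ** 2 * (x + 2) ** 2 + 0.5 * x

def grad(x, eps=1e-6):
    # Numerical derivative; good enough for the illustration.
    return (potential(x + eps) - potential(x - eps)) / (2 * eps)

x = 1.5                      # start inside the shallow basin
for _ in range(10_000):
    x -= 0.001 * grad(x)     # always move "downhill", never jump the barrier

print(round(x, 3))                     # ends near x ~ 1: the shallow basin's bottom
print(potential(x) > potential(-2.0))  # True: a lower basin exists, but it's unreachable
```

Spontaneous “downhill” motion, but no jumping out of the basin just because somewhere else is lower.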
Now, in the sense that people can “want the impossible”, then yes, I have no evidence that a planet doesn’t want to be a whale. What I perhaps should have said is: a planet has not identified being a whale as the goal or subgoal it is in pursuit of. Even taking this reasoning to the extreme, the very first steps toward becoming a whale would immediately hit the hard limits of free energy minimization, and so the planet could never even begin such a path, at least not when viewed as a single entity.
Now, in the sense that people can “want the impossible”, then yes, I have no evidence that a planet doesn’t want to be a whale.
Yup, that’s the case. This concept is meaningful because sometimes unexpected opportunities appear and the predictably impossible turns into an option. Or, more constructively, this concept is required to implement external “help” that is known in advance to be welcome.
As far as variational principles go, one difference is that a physical system displays no preference among the different local extrema. (IIRC you can even come up with models where the same system will minimize (an) action for some initial conditions and maximize it for others.) This makes a Lagrangian-style physical system a pretty poor CSA even if you go out of your way to model it as one.
CSAs can’t escape local optima either … unless you found your global optimum without telling us ;-)