Manipulation is when you have a specific outcome in mind and exert power on a system to move it toward that outcome. Chaperones don’t have any idea of what they want the final shape of a protein to look like.
It’s possible for a protein to fold into the lowest-energy fold without a chaperone; it’s just that the pressures inside the intracellular fluid frequently cause a protein to misfold. For proteins, misfolding has a straightforward definition: deviation from the lowest-energy fold, and that definition does not depend on the existence of chaperones. Early in evolution there were no chaperones, and cells had to deal with having more misfolded proteins.
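To make that definition concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the `energy` function and the candidate conformations are stand-ins, not real biophysics); the point is only that "misfolded" is defined relative to the energy landscape alone, with no reference to chaperones.

```python
import random

def energy(conformation):
    # Hypothetical energy: sum of squared "strain" between neighboring residues.
    return sum((a - b) ** 2 for a, b in zip(conformation, conformation[1:]))

# Twenty made-up candidate folds for a five-residue toy chain.
conformations = [tuple(random.choice([-1, 0, 1]) for _ in range(5))
                 for _ in range(20)]

# The "native" fold is simply the lowest-energy conformation found.
native_energy = min(energy(c) for c in conformations)

def is_misfolded(conformation):
    # Deviation from the lowest-energy fold. Whether a chaperone was
    # involved in reaching this conformation never enters the definition.
    return energy(conformation) > native_energy

print(sum(is_misfolded(c) for c in conformations), "of 20 are misfolded")
```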
Non-manipulation is about allowing self-regulating processes to determine the resulting shape, instead of a therapist working toward changing the patient into the shape that the therapist thinks would be better for the client.
Not sure I follow—chaperones don’t seem complex enough to have intent at all, so by that definition they are non-manipulative in the same sense that rocks are—it’s a concept that doesn’t apply to them, not something they could do and choose not to.
That’s a big contrast with human communication—there is definitely intent behind every communication. For this kind of action, the selective removal of forces seems near-indistinguishable from the selective addition of force in order to enable/influence some change. It feels like there’s a naturalistic fallacy going on—some underlying belief that what happens in a vacuum is better than what happens in a real equilibrium.
What about “communication” with a program?
On this site, concerns about the (theoretical) manipulative abilities of superhuman AI come up fairly often. Worries about Facebook or Twitter algorithms, usually framed around observed results or mechanism design, come up less often but are mentioned.
What do you want out of the algorithms you interact with, say, the ones underlying social media?