I agree that the general point (biology needs to address similar issues, so we can use it for inspiration) is interesting. (Seems related to https://www.pibbss.ai/ .)
That said, I am somewhat doubtful about the implied conclusion (that this is likely to help with AI, because the AI won’t mind): (1) there are already many workplace practices that people don’t like, so “people liking things” doesn’t seem like a relevant parameter of design; (2) (this is totally vague, handwavy, and possibly untrue, but:) biological processes might also not “like” being surveilled, replaced, etc., so the argument proves too much.
(1) « people liking things does not seem like a relevant parameter of design ».
This is quite a bold statement. I personally subscribe to the mainstream view that it is easier to get designs adopted when the adopters like them.
(2) Nice objection, and the observation of complex life forms gives a potential answer:
All healthy cells in a multicellular organism obey Apoptosis.
Apoptosis is literally « suicide, in a form that is easy to recycle, because the organism asks you to » (the request can come from inside, via the mitochondria, or from outside, generally from leucocytes).
Given that all your healthy cells accept even a literal kill-switch, and replacement, I firmly believe that they don’t mind surveillance either!
In complex multicellular life, the cells that refuse surveillance, replacement, or Apoptosis are the Cancerous Cells, and they don’t seem able to create any complex life form (only parasitic ones, feeding off their host and sometimes spreading to infect others, like HeLa).
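To make the design implication concrete, here is a minimal, purely illustrative sketch of an “apoptosis-compliant” software component; nothing in it comes from the thread itself, and the names (Worker, KillSignal) and the shutdown protocol are my own assumptions.

```python
# Toy sketch of a worker that honours an apoptosis-like kill-switch.
# All names and the protocol are illustrative assumptions, not a real API.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class KillSignal:
    """A shutdown request, analogous to an apoptosis trigger."""
    source: str   # "internal" (self-check, like mitochondria) or "external" (like a leucocyte)
    reason: str


@dataclass
class Worker:
    name: str
    log: List[str] = field(default_factory=list)

    def step(self, task: str, signal: Optional[KillSignal] = None) -> Optional[str]:
        # The defining property of a "healthy" worker: it always honours the
        # kill signal and winds down cleanly ("easy to recycle") instead of resisting.
        if signal is not None:
            self.log.append(f"{self.name}: shutting down ({signal.source}: {signal.reason})")
            return None
        return f"{self.name} did {task}"


if __name__ == "__main__":
    w = Worker("cell-42")
    print(w.step("fold proteins"))
    print(w.step("fold proteins", KillSignal("external", "flagged by a leucocyte-like auditor")))
    print(w.log)
```

The only point the sketch tries to capture is the one above: a “healthy” component treats the kill-switch as part of normal operation, not as something to resist.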
(1) « people liking things does not seem like a relevant parameter of design ».
This is quite a bold statement. I personally subscribe to the mainstream view that it is easier to get designs adopted when the adopters like them.
Fair point. I guess “not relevant” was too strong a phrasing. It would have been more accurate to say something like “people liking things might be neither sufficient nor necessary to get designs adopted, and it is not clear (at least to me) how much it matters compared to other aspects”.
Re (2): Interesting. I would be curious to know to what extent this is just a surface-level metaphor, or an unjustified anthropomorphisation of cells, vs actually having implications for AI design. (But I don’t understand biology at all, so I don’t really have a clue :( .)
(1) « Liking », or « desire », can be defined as « all other things being equal, agents will go for what they desire/like most, whenever given a choice ». Individual desires/likes/tastes vary.
(2) In Evolutionary Game Theory, consider a Game where a Mitochondria-like Agent offers you the choice between:
(Join the eukaryotes) mutualistic endosymbiosis, at the cost of obeying apoptosis or else being flagged as a Cancerous enemy;
(Remain non-eukaryote) refusal of the offer, at the cost of being treated by the Eukaryotes as a threat, or as a lesser symbiote.
In such a Game, the Agent making the offer is likely to win: to a rational agent receiving it, accepting is a winning wager. My last publication expands on this.
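To make the wager concrete under the choice rule from (1), here is a toy payoff table. The comment gives no numbers, so these payoffs are my own illustrative assumptions, chosen only to show what “a winning wager” would mean.

```python
# Toy numbers only: the payoffs below are assumptions for illustration, not
# values from the comment; they encode "joining and obeying apoptosis beats
# defecting or refusing", so the choice rule from (1) selects it.
payoffs = {
    "join_and_obey_apoptosis": 10.0,  # mutualistic endosymbiosis
    "join_then_defect": -5.0,         # flagged and destroyed as a Cancerous enemy
    "refuse_offer": 1.0,              # treated as a threat or lesser symbiote
}

# The rule from (1): all other things being equal, go for the most desired option.
rational_choice = max(payoffs, key=payoffs.get)
print(rational_choice)  # -> "join_and_obey_apoptosis" under these assumed numbers
```

Whether the real evolutionary payoffs actually have this ordering is, of course, exactly the empirical question.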
Evolution gives us many organically designed Systems which offer potential Solutions:
white blood cells move everywhere to spot and kill defective cells with a literal kill-switch (apoptosis)
A team of Leucocytes (white blood cells):
organically checking the whole organisation at all levels
scanning for signs of amorality/misalignment or any other error
flagging for surveillance, giving warnings, or sending to a restorative justice process, depending on severity
the agents themselves held to the highest standard
This is a system that could be implemented in a Company and fix most of the recursive middle-manager hell.
Many humans would not like that (real accountability is hard, especially for middle-men who benefit from the status quo), but an AI won’t mind.
So the AI “head” could routinely send its Leucocytes to check the whole system.
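Here is a minimal sketch of what such a sweep could look like in code, under my own assumptions: the organisation is a simple tree, and the names (Unit, AuditFinding, leucocyte_sweep) are hypothetical, not part of the proposal above.

```python
# A toy "leucocyte sweep": walk the whole organisation tree, at every level,
# and grade each unit by the severity of the misalignment signs it shows.
# All names and the 0-3 severity scale are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Unit:
    """One node of the organisation: an AI agent, a team, or a manager."""
    name: str
    misalignment: int = 0                      # 0 = clean; 1-3 = increasingly severe signs
    reports: List["Unit"] = field(default_factory=list)


@dataclass
class AuditFinding:
    unit: str
    action: str                                # graded response, as in the list above


ACTIONS = {1: "flag for surveillance", 2: "give warning", 3: "restorative justice process"}


def leucocyte_sweep(unit: Unit) -> List[AuditFinding]:
    """Routinely check the whole hierarchy and grade every unit, including the top."""
    findings = []
    if unit.misalignment > 0:
        findings.append(AuditFinding(unit.name, ACTIONS[min(unit.misalignment, 3)]))
    for child in unit.reports:                 # recurse: no level is exempt
        findings.extend(leucocyte_sweep(child))
    return findings


if __name__ == "__main__":
    org = Unit("head", reports=[
        Unit("middle manager", misalignment=2,
             reports=[Unit("worker A"), Unit("worker B", misalignment=1)]),
        Unit("worker C"),
    ])
    for finding in leucocyte_sweep(org):
        print(finding)
```

The “held to the highest standard” point would correspond to also running the sweep over the leucocyte agents themselves, which this sketch does not show.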