(1) « people liking things does not seem like a relevant parameter of design ».
This is quite a bold statement. I personally subscribe to the mainstream view that designs are easier to get adopted when the adopters like them.
(2) Nice objection, and observing complex life forms suggests a potential answer:
All healthy cells in a multicellular organism obey apoptosis.
Apoptosis is essentially « suicide, on the organism's request, in a way that makes you easy to recycle » (the request can originate internally, via the mitochondria, or externally, generally from leucocytes).
Given that all your healthy cells accept even a literal kill-switch, and replacement, I firmly believe they don't mind surveillance either!
In complex multicellular life, the cells that refuse surveillance, replacement, or apoptosis are the cancerous cells, and they don't seem able to create any complex life form (only parasitic ones, feeding off their host and sometimes spreading to infect others, like HeLa).
(1) « people liking things does not seem like a relevant parameter of design ».
This is quite a bold statement. I personally subscribe to the mainstream view that designs are easier to get adopted when the adopters like them.
Fair point. I guess “not relevant” was too strong a phrasing. It would have been more accurate to say something like “people liking things might be neither sufficient nor necessary to get designs adopted, and it is not clear (to me, at least) how much it matters compared to other aspects”.
Re (2): Interesting. I would be curious to know to what extent this is just a surface-level metaphor, or an unjustified anthropomorphisation of cells, versus actually having implications for AI design. (But I don’t understand biology at all, so I don’t really have a clue :( .)
(1) « Liking », or « desire », can be defined behaviourally: « all other things being equal, agents will choose what they desire/like most, whenever given a choice ». Individual desires/likings/tastes vary.
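Stated a bit more formally (my own restatement of the same idea, in standard revealed-preference notation; the thread itself stays informal):
$$ a \succ b \iff \text{offered } \{a, b\} \text{ with all else equal, the agent chooses } a. $$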
(2) In evolutionary game theory, consider a game where a mitochondria-like agent offers you the choice between:
(Join the eukaryotes) mutualistic endosymbiosis, at the cost of obeying apoptosis or being flagged as a cancerous enemy;
(Stay non-eukaryote) refusing the offer, at the cost of being treated by the eukaryotes as a threat, or at best a lesser symbiote.
In such a game, that agent is likely to win: to a rational agent, joining is a winning wager, as the sketch below illustrates. My latest publication expands on this.
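Here is a minimal Python sketch of that wager as a two-strategy evolutionary game under replicator dynamics. The payoff numbers are purely illustrative assumptions of mine (nothing here is measured biology); they are chosen so that joining pays off once enough of the population joins:

```python
# Illustrative replicator-dynamics sketch of the "join vs refuse" wager.
# All payoff values below are assumptions for illustration only.

JOIN, REFUSE = 0, 1

# payoff[my_strategy][opponent_strategy]
payoff = [
    [3.0, 1.0],  # JOIN:   thrives with fellow joiners, exploited by refusers
    [0.5, 1.5],  # REFUSE: treated as a threat by joiners, fine among refusers
]

def step(x_join: float, dt: float = 0.1) -> float:
    """One discrete replicator step for the population share of JOIN."""
    x_refuse = 1.0 - x_join
    f_join = payoff[JOIN][JOIN] * x_join + payoff[JOIN][REFUSE] * x_refuse
    f_refuse = payoff[REFUSE][JOIN] * x_join + payoff[REFUSE][REFUSE] * x_refuse
    f_avg = x_join * f_join + x_refuse * f_refuse
    return x_join + dt * x_join * (f_join - f_avg)

x = 0.6  # start above the unstable mixed equilibrium (~1/6 with these payoffs)
for _ in range(200):
    x = step(x)
print(f"long-run share of joiners: {x:.3f}")  # ~1.000
```

With these assumed payoffs the game is a coordination game: once the share of joiners passes the unstable mixed equilibrium (about 1/6 here), replicator dynamics drives joining to fixation, which is one way to read « that agent is likely to win ».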