Getting traction on the deontic feasibility hypothesis
Davidad believes that using formalisms such as Markov blankets would be crucial for encoding the desideratum that the AI should not cross boundary lines at various levels of the world-model. We only need to “imply high probability of existential safety”, so according to davidad, “we do not need to load much ethics or aesthetics in order to satisfy this claim (e.g. we probably do not get to use OAA to make sure people don’t die of cancer, because cancer takes place inside the Markov Blanket, and that would conflict with boundary preservation; but it would work to make sure people don’t die of violence or pandemics)”. Discussing this hypothesis more thoroughly seems important.
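To make the flavour of that desideratum concrete, here is a minimal toy sketch, assuming a world-model whose state variables can be partitioned per protected system; the class, function, and variable names are illustrative assumptions, not OAA’s actual machinery. Note how it mirrors davidad’s cancer-vs-pandemic contrast: curing cancer would require writing to internal states, while suppressing a pathogen would not.

```python
# Toy sketch (illustrative, not OAA): a "membrane" as a partition of
# world-model state variables, plus a check that a proposed intervention
# never writes to a protected system's internal states.

from dataclasses import dataclass

@dataclass(frozen=True)
class Membrane:
    internal: frozenset   # states "inside" the protected system
    blanket: frozenset    # sensory/active states mediating interaction
    external: frozenset   # the rest of the world-model

def respects_membrane(intervention: set, membrane: Membrane) -> bool:
    """An intervention (the set of state variables it directly sets)
    respects the membrane iff it touches no internal states."""
    return intervention.isdisjoint(membrane.internal)

# Invented world-model fragment for one protected human.
human = Membrane(
    internal=frozenset({"human.metabolism", "human.beliefs"}),
    blanket=frozenset({"human.senses", "human.actions"}),
    external=frozenset({"food_supply", "pathogen_level"}),
)

print(respects_membrane({"pathogen_level"}, human))    # True: acts only outside the membrane
print(respects_membrane({"human.metabolism"}, human))  # False: crosses the boundary
```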
I think any finitely-specified deontology wouldn’t ensure existential safety; more than that, following just a finite deontology (such as “don’t interfere with others’ boundaries”) could well lead to a dystopian scenario for humanity.
In my current meta-ethical view, ethics is a style of behaviour (i.e., the dynamics of a physical system) that is inferred by the system (or its supra-system, such as in the course of genetic or cultural evolution). The style could be characterised/described within multiple different (perhaps infinitely many) modelling frameworks/theories for describing the dynamics of the system (perhaps on various levels of description). Examples of such modelling frameworks are “raw” neural dynamics/connectomics (note: this is already a modelling framework, not the “bare” reality!), Bayesian Brain/Active Inference, Reinforcement Learning, cognitive psychology, evolutionary game theory, etc. All these theories would lead to somewhat different descriptions of the same behaviour, descriptions which don’t completely cover each other[1].
It seems easy to find counterexamples in which intruding into someone’s boundaries is the ethical thing to do and abstaining from it would be highly unethical. Sorting out multilevel conflicts/frustrations between infinitely many system/boundary partitions of the world[2] in the context of infinitely many theoretical frameworks (such as quantum mechanics[3], the neural network framework[4], the theory of conscious agents[5], etc.) should guide the attainment of the best ethical style that we (AI agents) can reach, but I don’t think it could be captured by a single deontic rule.
[1] However, in “Mathematical Foundations for a Compositional Account of the Bayesian Brain” (2022), Smithe establishes that it might be possible to formally convert between these frameworks using category theory.

[2] Vanchurin, V., Wolf, Y. I., Katsnelson, M. I., & Koonin, E. V. (2022). Toward a theory of evolution as multilevel learning. Proceedings of the National Academy of Sciences, 119(6), e2120037119. https://doi.org/10.1073/pnas.2120037119

[3] Fields, C., Friston, K., Glazebrook, J. F., & Levin, M. (2022). A free energy principle for generic quantum systems. Progress in Biophysics and Molecular Biology, 173, 36–59. https://doi.org/10.1016/j.pbiomolbio.2022.05.006

[4] Vanchurin, V. (2020). The World as a Neural Network. Entropy, 22(11), 1210. https://doi.org/10.3390/e22111210

[5] Hoffman, D. D., Prakash, C., & Prentner, R. (2023). Fusions of Consciousness. Entropy, 25(1), 129.
Okay, I’ll try to summarize your main points. Please let me know if this is right:

1. You think «membranes» will not be able to be formalized in a consistent way, especially in a way that is consistent across different levels of modeling.

2. “It seems easy to find counterexamples in which intruding into someone’s boundaries is the ethical thing to do and abstaining from it would be highly unethical.”
Have I missed anything? I’ll respond after you confirm.
Also, would you please share any key example(s) of #2?
You think «membranes» will not be able to be formalized in a consistent way, especially in a way that is consistent across different levels of modeling
No, I think membranes could be formalised (via Markov blankets, objective “joints” of the environment as in https://arxiv.org/abs/2303.01514, etc.). Such formalisations are theory-laden, but I think the “diff” between the boundaries identifiable from the perspective of different theories is usually negligible.
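For concreteness, the graphical-model reading of this is mechanical: in a Bayesian network, the Markov blanket of a node is its parents, its children, and its children’s other parents. A small sketch, in which the toy graph and variable names are invented assumptions:

```python
# Compute the Markov blanket of a node in a toy Bayesian-network structure.
import networkx as nx

def markov_blanket(g: nx.DiGraph, node) -> set:
    parents = set(g.predecessors(node))
    children = set(g.successors(node))
    co_parents = {p for c in children for p in g.predecessors(c)}
    return (parents | children | co_parents) - {node}

g = nx.DiGraph()
g.add_edges_from([
    ("environment", "senses"),
    ("senses", "beliefs"),
    ("beliefs", "actions"),
    ("actions", "environment"),
])

print(markov_blanket(g, "beliefs"))  # {'senses', 'actions'}
```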
We humans intrude into each other’s boundaries, and into the boundaries of animals, organisations, communities, etc., all the time. A surgeon intruding into the boundaries of a patient is an ethical thing to do. If an AI automated the entire economy, waited until humanity completely lost the ability to run civilisation on its own, and then suddenly stopped all maintenance of the automated systems that support human lives and watched humans die out because they could not support themselves, it would be “respecting humans’ boundaries”, but it would also be an evil treacherous turn. Messing with Hitler’s boundaries (i.e., killing him) in 1940 would have been an ethical action from the perspective of most systems that may care about it (individual humans, organisations, countries, communities).
I think that the concept of boundaries (including consciousness boundaries: what is the locus of animal consciousness? Just the brain, the whole body, or does it even extend beyond the body? What is the locus of an AI’s consciousness?) is undeniably important and usable for inferring ethical behaviour. But I don’t think a simple “winning” deontology is derivable from this concept. I’m currently preparing an article where I describe how, from the AI engineering perspective, deontology, virtue ethics, and consequentialism could be seen as engineering techniques (approaches) that could help to produce and continuously infer an ethical style of behaviour. None of these “classical” approaches to normative ethics is either necessary or sufficient, but they all could help to improve ethics in some cognitive architectures.
I think that the concept of boundaries […] is undeniably important and usable for inferring ethical behaviour. But I don’t think a simple “winning” deontology is derivable from this concept.
I see
I’m currently preparing an article where I describe how, from the AI engineering perspective, deontology, virtue ethics, and consequentialism
please lmk when you post this. i’ve subscribed to your lw posts too
FWIW, I don’t think the examples given necessarily break «membranes» as a “winning” deontological theory.
A surgeon intruding into the boundaries of a patient is an ethical thing to do.
If the patient has consented, there is no conflict.
(Important note: consent does not always nullify membrane violations. In this case it does, but there are many cases where it doesn’t.)
If an AI automated the entire economy, waited until humanity completely lost the ability to run civilisation on its own, and then suddenly stopped all maintenance of the automated systems that support human lives and watched humans die out because they could not support themselves, it would be “respecting humans’ boundaries”, but it would also be an evil treacherous turn.
I think a way to properly understand this might be: if Alice makes a promise to Bob, she is essentially giving Bob a piece of herself, and that changes how he plans for the future and whatnot. If she revokes that on terms not part of the original agreement, she has stolen something from Bob, and that is a violation of membranes. ?
If the AI promises to support humans under an agreement, then breaks that agreement, that is theft.
Messing with Hitler’s boundaries (i.e., killing him) in 1940 would have been an ethical action from the perspective of most systems that may care about it (individual humans, organisations, countries, communities).
In a case like this I wonder if the theory would also need something like “minimize net boundary violations”, kind of like how some deontologies make murder okay sometimes.
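A toy sketch of that contrast, with invented policy names and violation counts: a hard deontic filter forbids any violation committed by the agent itself, while a “minimize net boundary violations” rule aggregates everyone’s violations and can endorse committing one violation to prevent many.

```python
# Invented numbers, purely illustrative.
policies = {
    # (violations the agent itself commits, violations by others that then occur)
    "stand_aside": (0, 10),  # let an aggressor keep violating others' membranes
    "intervene":   (1, 0),   # cross the aggressor's membrane once to stop it
}

def deontic_filter(policies):
    """Permit only policies in which the agent commits zero violations."""
    return [name for name, (own, _) in policies.items() if own == 0]

def minimize_net_violations(policies):
    """Pick the policy with the fewest total violations, no matter who commits them."""
    return min(policies, key=lambda name: sum(policies[name]))

print(deontic_filter(policies))           # ['stand_aside']
print(minimize_net_violations(policies))  # 'intervene'
```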
But then this gets really close to utilitarianism and that’s gross imo. So I’m not sure. Maybe there’s another way to address this? Maybe I see what you mean