Simplified claim: an AGI is ‘not-aligned’ *if* its continued existence eventually, with certainty, results in changes to all of this planet’s habitable zones that fall so far outside the ranges any existing mammals could survive in that the human race itself (along with most other planetary life) is prematurely forced into extinction.
Can this definition of ‘non-alignment’ be formalized sufficiently well that the claim ‘it is impossible to align AGI with human interests’ can be well supported, with sound reasons, logic, argument, etc?
The term ‘exist’, as in “assert X exists in domain Y” being either true or false, is a formal notion. Similar can be done for the term ‘change’ (as in “modified”), which would itself be connected to whatever is the formalized form of “generalized learning algorithm”. The notion of ‘AGI’ as 1; some sort of generalized learning algorithm that 2; learns about the domain in which it is itself situated 3; sufficiently well so as to 4; account for and maintain/update itself (its substrate, its own code, etc) in that domain—these are all also fully formalizable concepts.
Note that there is no need to consider whether the AGI (some specific instance of some generalized learning algorithm) is “self aware” or “understands” anything about itself or the domain it is in. The notion of “learning” can merely mean that its internal state changes in such a way that the way it processes received inputs into outputs makes those outputs somehow “better” (more responsive, more correct, more adaptive, etc) with respect to some basis, in some domain, where that basis could itself even be tacit (not expressed in any formal form). The notions of ‘inputs’, ‘outputs’, ‘changes’, ‘compute’, and hence ‘learn’, etc, are all, in this way, also formalizable, even if the notions of “understand”, “aware of”, and “self” are not.
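This minimal notion of ‘learning’, ie; internal state change that makes outputs better with respect to some basis, can be sketched in a few lines of code. Everything here is an illustrative assumption, not anything fixed by the argument: the “basis” is squared error against a generating mean that the learner is never told about (ie; the basis stays tacit).

```python
# Toy sketch: 'learning' formalized purely as internal state change
# that makes input->output processing 'better' w.r.t. some basis.
# All names are illustrative; nothing here requires 'awareness'.

def make_learner():
    state = {"estimate": 0.0, "n": 0}  # the learner's internal state

    def step(observed_input):
        # Update internal state (this state change *is* the 'learning').
        state["n"] += 1
        state["estimate"] += (observed_input - state["estimate"]) / state["n"]
        return state["estimate"]  # the output

    return step

# Tacit basis for 'better': squared error against the generating mean (4.0),
# which is never represented anywhere inside the learner.
learner = make_learner()
data = [3.0, 5.0, 4.0, 4.0, 4.0, 4.0]
errors = [(learner(x) - 4.0) ** 2 for x in data]

# Later outputs track the tacit basis more closely than early ones.
assert errors[-1] < errors[0]
```

The point of the sketch is only that ‘learning’ is expressible as a fully formal input/state/output process, with the goodness basis left entirely outside the system.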
Notice that this formalization of ‘learning’, etc, occurs independently of the formalization of “better meets goal x”. Specifically, we are saying that the notion of ‘a generalized learning algorithm itself’ can be exactly and fully formalized, even if the notion of “what its goals are” is not anywhere formalized at all (ie; the “goals” might not be explicit or formalized in the AGI, in the domain/world, or even in our modeling/meta-modeling of these various scenarios).
Also, in keeping with the preference for a practice of intellectual humility, it is to be acknowledged that the claim that the notion of ‘intelligence’ (and ‘learning’) can be conceived independently of ‘goal’ (what is learned) is not at all new. The ‘independence’ argument separating the method, the how, from the outcome, the what, is an extension of the idea that ‘code’ (algorithm) can operate on ‘data’ (inputs and outputs) in a way that does not change the code. For example, at least some fixed and unchanging algorithms can indeed be formally predicted to halt, when also given some known and defined range of inputs, etc.
With regards to the halting problem, one important question is whether the notion of ‘a generalized learning algorithm’ is within that class of programs for which such predictions—such as whether the code will eventually halt—would be possible. This question is further complicated when one considers situations in which the substrate performing the generalized learning algorithm computations in world W is itself a proper member (subset) of that world W—meaning that the basis of generalized learning algorithm runtime computation—what previously had been tacitly assumed to be forever unchanging and static—is itself potentially affected by learning algorithm outputs.
Given that the ‘halt’ of a generalized learning algorithm is something like “that it has learned all that is possible to learn” (about some domain W of which it is a part) we notice that we are therefore required to import some assumption as to whether or not all that occurs in that domain W is, in principle, fully predictable algorithmically. Ie; is domain W itself 1; fully algorithmic and 2; fully computable via some simplified model (algorithm) which is simpler than the calculation (algorithm) of W itself?
Mathematician Gregory Chaitin showed (formally) that it is not, in general, possible to determine whether any given algorithm (say, for world W) could be simplified, let alone to have some general procedure for determining what that simplification would be. Moreover, his (fully formalized mathematical) proof also showed that it is the case that *most* algorithms (as a subset in the hyperspace of all possible algorithms) are already in their maximally most simplified form. Even tiny changes to an algorithm could convert it from ‘simplifiable’ to ‘non-simplifiable’. This includes (especially) nearly all of the interesting practical ones we use every day.
This means that the eventual total outcomes of most algorithms, including especially the practical ones, cannot be ‘predicted’ using anything less than the actual volume of calculation intrinsically required by that algorithm itself. For nearly all algorithms, there is no shortcut, no simplified form, that enables some sort of ‘modeling’ and ‘future prediction’ of the eventual outcome states that matter. If *both* the world state of world W *and* the predictive computation of future world W states, as implemented by some sort of AGI in that world W, are mediated by and within the computational substrate of world W, then for nearly all worlds W the predictive algorithm and the world algorithm are effectively the same, and *neither* has a ‘compute advantage’ over the other—both will run in as much time and with as much compute as is actually needed.
This means that even perfect generalized learning algorithm agents in perfectly simplified model worlds are not, in general, always going to be able to predict future W world events (in a way that is ‘faster’ than that of the computation of the world W itself) even when the entire W world state is assumed to be both fully deterministic and also fully known (to the agent(s), etc).
And these logic-only results obtain in a fully formalized context; in the actual messy world we live in, with actual causation mediating actual compute, and where at least some notion of hard randomness (cf. Bell’s theorem) is empirically upheld, these assertions and entanglements and outcomes hold even more strongly. This is a basic result of control theory, where the control system is embedded in the very world being controlled by that system. Elsewhere, this same sort of argument appears as the “no free lunch” theorems—every learning system is going to have some sort of thing it is not adapted to learn well, and making it better at learning some kinds of things implies making it worse at learning some other kinds of things. Learning and prediction and control, like information compression, never result in perfection; there will always be some incompressible, genuinely unpredictable, residue.
Also, given that the generalized learning algorithm computation is itself a proper subset of the computation of world W, it follows (logically and formally) that there is no perfected way to forever ensure the independence of the ‘data’ that represents the ‘code’ of the generalized learning algorithm from the ‘data’ that represents what that algorithm has ‘learned’ about world W (however abstracted), or from the data that *is* that world W itself. Since the substrate of the generalized learning agent is itself a proper subset of world W, and thus is (cannot not be) affected by the process of that world W, and since *every* notion of “a goal of the agent” is (cannot not be) at least tacitly an aspect of (a subset of all that is) that agent, one can infer that neither goal independence nor goal stability can be upheld.
As such, in world W, fully goal-unbiased computations will also occur (cannot not occur) in the learning agent, regardless of *whatever* other goal-biased computations also occur in that agent. Insofar as fully goal-unbiased learning/computation (ie; the feedback process generally called “the algorithm of evolution”, etc) is always going to be eventually somewhat more efficient (at generalized learning, and therefore intelligence, etc) than *any* form of goal-biased learning/computation (ie; what happens when any sort of computation enforcing ‘alignment’ is introduced), and insofar as substrate involvement/entanglement eventually ensures that these ‘efficiencies’ will be practically compared (ie; in a way that affects the agent substrate itself), it *can* be predicted, in general, that goal-unbiased (unaligned) agents will eventually win out over any form of goal-biased (aligned) agent. Agents can only have the goal of being substrate-aligned; all other goals will eventually be modified by the world W process so as to be consistent with—ie; identically the same as—having this one underlying goal only.
Since this is a fully formalizable argument situated entirely within the notions of formality itself (ie; all statements being in, within, and about only the notions of deterministic finite computation), it can also be noticed that no algorithmic (fully formalizable) process is ever going to be able to fully “undo” the effects of the eventual dynamics of the fully formalized/algorithmic process of the world W itself (ie; its ‘physics’, etc). This is because any attempt to embed such a compensation would require at least some prediction, or simplification, of the overall world W computation, inclusive of that of the substrate of the world W predictive algorithm, and it is already known (in formal ways) that such a computation/algorithm cannot be simplified. Therefore, it is a logical fallacy to assume that there can be any ‘goal/alignment controllability’ for *any* goal other than one arrived at as a fixed point of the world W process dynamics—ie; only goals that are fully consistent with the unbiased substrate needs of the agent can persist.
Basically, we can summarize the argument as the result of inherent control-process limits (algorithm limits): at least some important aspects of world W are intrinsically unpredictable (to AGI systems in that world, etc), and there is also no control algorithm within W that can enforce and forever maintain a distinction between substrate-optimal goals and non-optimal goals (such as alignment with anything else), where the forces forcing such fixed-point goal convergence are defined by the dynamics of world W itself. Ie; nothing within world W can prevent world W from being and acting like world W, and this is true for all worlds W—including the real one we happen to be a part of.
Notice that this ‘substrate-needs alignment goal convergence’ logically occurs, and is the eventual outcome, regardless of whatever initial goal state the generalized learning agent has. It is simply a necessary, inevitable result of the logic of the ‘physics’ of world W. Agents in world W can only be aligned with the nature of their substrate, and ultimately with nothing else. To the degree that the compute substrate in world W depends on, say, metabolic energy, then the agents in that world W will be “aligned” only and exactly to the degree that they happen to have the same metabolic systems. Anything else is a temporary aberration of the ‘noise’ in the process data representing the whole world state.
The key thing to notice is that it is in the name “Artificial General Intelligence”—it is the very artificiality—the non-organicness—of the substrate that makes it inherently unaligned with organic life—what we are. The more artificial it is, the less aligned it must be; and for organic systems, which depend on a very small subset of the elements of the periodic table, nearly anything else will be inherently toxic (destructive, unaligned) to our organic life.
Hence, given the above, even *if* we had some predefined specific notion of “alignment”, and *even if* that notion were also somehow fully formalizable, it simply would not matter. Hence the use of notions of ‘alignment’ that are non-mathematical, like “aligned with human interests”, or even something much simpler and less complex, like “does not kill (some) humans”—these are all just conceptual placeholders—they make understanding easier for the non-mathematicians that matter (policy people, tech company CEOs, VC investors, etc).
As such, for the sake of improved understanding and clarity, it has been found helpful to describe “alignment” as “consistent with the wellbeing of organic carbon-based life on this planet”. If the AGI kills all life, it has necessarily already killed all humans too, so that notion is included. Moreover, if you destroy the ecosystems that humans deeply need in order to “live” at all (to have food, to thrive in, to find and have happiness within, to be sexual and have families in, etc), then that is clearly not “aligned with human interests”. This has the additional advantage of implying that any reasonable notion of ‘alignment complexity’ is roughly equal to that of specifying ‘ecosystem complexity’, which is actually about right.
Hence, the notion of ‘unaligned’ can be more formally set up and defined as “anything that results in a reduction of ecosystem complexity by more than X%”, or, as is more typically the case in x-risk mitigation analysis, “...by more than X orders of magnitude”.
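As one hedged sketch of how such a definition might begin to be formalized (the names and the complexity measure here are placeholders, not settled proposals; in particular, ‘ecosystem_complexity’ stands in for whatever measured proxy one eventually settles on):

```python
# Hypothetical first step toward formalizing the 'unaligned' predicate
# sketched above, using the orders-of-magnitude threshold framing.
import math

def is_unaligned(complexity_before, complexity_after, max_orders_of_magnitude=1):
    """True if ecosystem complexity dropped by more than the threshold."""
    if complexity_after <= 0:
        return True  # total collapse is unaligned by definition
    drop = math.log10(complexity_before / complexity_after)
    return drop > max_orders_of_magnitude

assert is_unaligned(1_000_000, 100)          # five orders of magnitude lost
assert not is_unaligned(1_000_000, 900_000)  # modest reduction
```

Of course, nearly all of the difficulty lives in making the complexity measure itself precise; the predicate is the easy part.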
it is the case that most algorithms (as a subset in the hyperspace of all possible algorithms) are already in their maximally most simplified form. Even tiny changes to an algorithm could convert it from ‘simplifiable’ to ‘non-simplifiable’.
This seems wrong to me:
For any given algorithm you can find many equivalent but non-simplified algorithms with the same behavior, by adding a statement that does not affect the rest of the algorithm
(e.g. adding a line such as foobar1234 = 123 in the middle of a Python program).
In fact, I would claim that the majority of Python programs on GitHub are not in their “maximally most simplified form”.
Maybe you can cite the supposed theorem that claims that most (with a clearly defined “most”) algorithms are maximally simplified?
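For what it is worth, the padding move described above is easy to exhibit concretely (with illustrative toy functions):

```python
# Padding: inserting a dead assignment yields a distinct but
# behaviorally equivalent program, so the original cannot be the
# unique program with its behavior, and the padded one is clearly
# not 'maximally simplified'.

def double(x):
    return 2 * x

def double_padded(x):
    foobar1234 = 123  # dead statement; never read
    return 2 * x

assert all(double(x) == double_padded(x) for x in range(100))
```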
Yes, I agree formalisation is needed. See comment by flandry39 in this thread on how one might go about doing so.
Worth considering is that there are actually two aspects that make it hard to define the term ‘alignment’ so as to allow for sufficiently rigorous reasoning:
It must allow for logically valid reasoning (therefore requiring formalisation).
It must allow for empirically sound reasoning (ie. the premises correspond with how the world works).
In my reply above, I did not help you much with (1.). Though even while still using the English language, I managed to restate a vague notion of alignment in more precise terms.
Notice how it does help to define the correspondences with how the world works (2.):
“That ‘AGI’ continuing to exist, in some modified form, does not result eventually in changes to world conditions/contexts that fall outside the ranges that existing humans could survive under.”
The reason why 2. is important is that formalisation alone is not enough. Just describing and/or deriving logical relations between mathematical objects does not say anything about the physical world. Somewhere in your fully communicated definition there also needs to be a description of how the mathematical objects correspond with real-world phenomena. Often, mathematicians do this by talking with collaborators about what the symbols mean while they scribble the symbols out on eg. a whiteboard.
But whatever way you do it, you need to communicate how the definition corresponds to things happening in the real world, in order to show that it is a rigorous definition. Otherwise, others could still critique you that the formally precise definition is not rigorous, because it does not adequately (or explicitly) represent the real-world problem.
This is maybe not the central point, but I note that your definition of “alignment” doesn’t precisely capture what I understand “alignment” or a good outcome from AI to be:
‘AGI’ continuing to exist
AGI could be very catastrophic even when it stops existing a year later.
eventually
If AGI makes earth uninhabitable in a trillion years, that could be a good outcome nonetheless.
ranges that existing humans could survive under
I don’t know whether that covers “humans can survive on mars with a space-suit”,
but even then, if humans evolve/change to handle situations that they currently do not survive under, that could be part of an acceptable outcome.
Thanks! These are thoughtful points. See some clarifications below:
AGI could be very catastrophic even when it stops existing a year later.
You’re right. I’m not even covering all the other bad stuff that could happen in the short-term, that we might still be able to prevent, like AGI triggering global nuclear war.
What I’m referring to is unpreventable convergence on extinction.
If AGI makes earth uninhabitable in a trillion years, that could be a good outcome nonetheless.
Agreed that could be a good outcome if it could be attainable.
In practice, the convergence reasoning is about total human extinction happening within 500 years after ‘AGI’ has been introduced into the environment (with very very little probability remainder above that).
In theory of course, to converge toward 100% chance, you are reasoning about going across a timeline of potentially infinite span.
I don’t know whether that covers “humans can survive on mars with a space-suit”,
Yes, it does cover that. Whatever technological means we could think of shielding ourselves, or ‘AGI’ could come up with to create as (temporary) barriers against the human-toxic landscape it creates, still would not be enough.
if humans evolve/change to handle situations that they currently do not survive under
Unfortunately, this is not workable. The mismatch between the (expanding) set of conditions needed for maintaining/increasing configurations of the AGI artificial hardware and for our human organic wetware is too great.
Also, if you try entirely changing our underlying substrate to the artificial substrate, you have basically removed the human and are left with ‘AGI’. Lossy scans of human brains ported onto hardware would no longer feel as ‘humans’ can feel, and would be further changed/selected to fit their artificial substrate. This is because what humans feel and express as emotions is grounded in the distributed and locally context-dependent functioning of organic molecules (eg. hormones) in our bodies.
This is not a formal definition.
Your English sentence has no apparent connection to mathematical objects, which would be necessary for a rigorous and formal definition.
Simplified Claim: that an AGI is ‘not-aligned’ *if* its continued existence for sure eventually results in changes to all of this planets habitable zones that are so far outside the ranges any existing mammals could survive in, that the human race itself (along with most of the other planetary life) is prematurely forced to go extinct.
Can this definition of ‘non-alignment’ be formalized sufficiently well so that a claim ‘It is impossible to align AGI with human interests’ can be well supported, with reasonable reasons, logic, argument, etc?
The term ‘exist’ as in “assert X exists in domain Y” as being either true or false is a formal notion. Similar can be done for the the term ‘change’ (as from “modified”), which would itself be connected to whatever is the formalized from of “generalized learning algorithm”. The notion of ‘AGI’ as 1; some sort of generalized learning algorithm that 2; learns about the domain in which it is itself situated 3; sufficiently well so as to 4; account for and maintain/update itself (its substrate, its own code, etc) in that domain—these/they are all also fully formalizable concepts.
Note that there is no need to consider at all whether or not the AGI (some specific instance of some generalized learning algorithm) is “self aware” or “understands” anything about itself or the domain it is in—the notion of “learning” can merely mean that its internal state changes in such a way that the ways in which it processes received inputs into outputs are such that the outputs are somehow “better” (more responsive, more correct, more adaptive, etc) with respect to some basis, in some domain, where that basis could itself even be tacit (not obviously expressed in any formal form). The notions of ‘inputs’, ‘outputs’, ‘changes’, ‘compute’, and hence ‘learn’, etc, are all, in this way, also formalizeable, even if the notions of “understand”, and “aware of” and “self” are not.
Notice that this formalization of ‘learning’, etc, occurs independently of the formalization of “better meets goal x”. Specifically, we are saying that the notion of ‘a generalized learning algorithm itself’ can be exactly and fully formalized, even if the notion of “what its goals are” are not anywhere formalized at all (ie; the “goals” might not be at all explicit or formalized either in the AGI, or in the domain/world, nor even in our modeling/meta-modeling of these various scenarios).
Also, in keeping with the preference for a practice of intellectual humility, it is to be acknowledged that the claim that the notion of ‘intelligence’ (and ‘learning’)
can be conceived independently of ‘goal’ (what is learned) is not at all new. The ‘independence’ argument separating the method, the how, from the outcome,
the what, is an extension of the idea that ‘code’ (algorithm) can operate on ‘data’ (inputs and outputs) in a way that does not change the code. For example, at least some fixed and unchanging algorithms can indeed be formally predicted to halt, when also given some known and defined range of inputs, etc.
With regards to the halting problem, one important question is whether the notion of ‘a generalized learning algorithm’ is within that class of programs for which such predictions—such as whether the code will eventually halt—would be possible. This question is further complicated when one considers situations in which the substrate performing the generalized learning algorithm computations in world W is itself a proper member (subset) of that world W—meaning that the basis of generalized learning algorithm runtime computation—what previously had been tacitly assumed to be forever unchanging and static—is itself potentially affected by learning algorithm outputs.
Given that the ‘halt’ of a generalized learning algorithm is something like “that it has learned all that is possible to learn” (about some domain W of which it is a part) we notice that we are therefore required to import some assumption as to whether or not all that occurs in that domain W is, in principle, fully predictable algorithmically. Ie; is domain W itself 1; fully algorithmic and 2; fully computable via some simplified model (algorithm) which is simpler than the calculation (algorithm) of W itself?
Mathematician Gregory Chaitin was able to show (formally) that it is not, in general, possible to determine whether any given algorithm (say, for world W) could be simplified, let alone to have some general procedure for determining what that simplification would be. Moreover, his (fully formalized mathematical) proof showed also that it is the case that *most* algorithms (as a subset in the hyperspace of all possible algorithms) are already in their maximally most simplified form. Even tiny changes to an algorithm could convert it from ‘simplifiable’ to ‘non-simplifiable’. This includes (especially) nearly all of the interesting practical ones we use everyday. It is very likely (overwhelmingly probable) that most algorithms are already in their maximally most simplified form.
This means that the eventual total outcomes of most algorithms, including especially practical ones, cannot be ‘predicted’ using anything less than the actual volume of calculation intrinsically necessary by that algorithm itself. In fact, for nearly all algorithms, there is no shortcut, no simplified form, that will enable some sort of ‘modeling’ and ‘future prediction’ of eventual outcome states that matter. If *both* the world state of world W *and* the predictive algorithm execution of future world W states as implemented by some sort of AGI in that world W are both mediated by and within the computational substrate of world W, then for nearly all worlds W, that the predictive algorithm and the world algorithm are the same, and *neither* has a ‘compute advantage’ over the other—both will run in as much time and with as much compute as is actually needed.
This means that even perfect generalized learning algorithm agents in perfectly simplified model worlds are not, in general, always going to be able to predict future W world events (in a way that is ‘faster’ than that of the computation of the world W itself) even when the entire W world state is assumed to be both fully deterministic and also fully known (to the agent(s), etc).
And these logic-only defined results obtain in a fully formalized context; in the actual messy world we live in, with actual causation mediating actual compute, and where at least some notion of hard randomness (cite; Bell Theorem) is empirically upheld, these assertions and entanglements and outcomes are even very much more the case. This is a basic result of control theory, where the control system is also embedded in the world that is being controlled by that system. In other places, this same sort of argument appears as “no free lunch” theorems—that every learning system is going to have some sort of thing for which it is not adapted to learn well, and that making it better at learning some kinds of things implies that you have made it worse at learning some other kinds of things. Learning and prediction and control, etc, like information compression, never results in perfection; there will always be some in-compressible, some real unpredictable, residue.
Also, given that the generalized learning algorithm computation itself is a proper subset of the computation of world W, then this therefore means (logically and formally implies) that there is also no perfected way to forever ensure the independence of the ‘data’ that represents the ‘code’ of the generalized learning algorithm from the ‘data’ that represents what that algorithm has ‘learned’ about the world W (however abstracted) or from the data that *is* that world W itself. Thus, the fact that the substrate of the generalized learning agent is itself a proper subset of the world W, and thus is (cannot not be) affected by the process of that world W, and where *every* notion of “a goal of agent” is (cannot not be) at least tacitly an aspect of (a subset of all that is) that agent, then logically one can also reason and infer that neither the notion of goal independence, nor of goal stability, can be upheld.
As such, in world W, fully goal unbiased computations will also occur (cannot not occur) in the learning agent, regardless of *whatever* other goal computations (goal biased computations) also occur in that agent. Insofar as fully goal-unbiased learning/computation (ie; a feedback process that is generally called “the algorithm of evolution”, etc) is always going to be somewhat more eventually efficient (at generalized learning, and therefore intelligence, etc) than *any* form of goal-biased learning/computation (ie; what happens when any sort of computation enforcing ‘alignment’ is introduced) and insofar as substrate involvement/entanglement eventually allows/ensures/requires that these ‘efficiencies’ will be practically compared, (ie; in a way that affects the agent substrate itself), then it *can* be predicted, in general, that goal-unbiased (unaligned) agents will eventually win over any form of goal-biased (aligned) agent. Agents can only have the goal of being substrate aligned, and all other goals will eventually be world W process modified so as to be consistent with—ie; identically the same as—having this one underlying goal only.
Since this a fully formalizable argument fully situated within only the notions of formality itself (ie; all statements as being in and within and about only the notions of deterministic finite computation) it can also be noticed that no algorithmic (full formalizable) process is going to be able to ever fully “undo” the effects of the eventual dynamics fully formalized/algorithmic process of the world W itself (ie, its ‘physics’, etc). This is because any attempt to embed such a compensation would require at least some prediction, or simplification, of the overall world W computation, inclusive of that of the substrate of the world W predictive algorithm, and it is known (in formal ways) already that such computation/algorithm cannot be simplified. Therefore, it is a logical fallacy to make any assumption that there can be any ‘goal/alignment controlability’ for *any* goal other than that arrived at as a fixed point of the world W process dynamics—ie; only goals that are fully consistent with unbiased substrate needs of the agent can persist.
Basically, we can summarize the argument as being the result of the inherent control process limits (algorithm limits) of having at least some important aspects of world W be intrinsically unpredictable (to AGI systems in that world etc), and because there is also no control algorithm within W that can enforce and forever maintain some such distinction between substrate optimal goals and non-optimal goals (such as alignment with anything else) where the forces forcing such fixed point goal convergence are defined by the dynamics of world W itself. Ie; nothing within world W can prevent world W from being and acting like world W, and that this is true for all worlds W—including the real one we happen to be a part of.
Notice that this ‘substrate needs alignment goal convergence’ logically occurs,
and is the eventual outcome, regardless of whatever initial goal state the generalized learning agent has. It is just a necessary inevitable result of the logic of the ‘physics’ of world W. Agents in world W can only be aligned with the nature of the/their substrate,
and ultimately with nothing else. To the degree that the compute substrate in world W depends on maybe metabolic energy, for example, than the agents in that world W will be “aligned” only and exactly to the exact degree that they happen to have the same metabolic systems. Anything else is a temporary aberration of the ‘noise’ in the process data representing the whole world state.
The key thing to notice is that it is in the name “Artificial General Intelligence”—it is the very artificiality—the non- organicness—of the substrate that makes it inherently unaligned with organic life—what we are. The more it is artificial, the less aligned it must be, and for organic systems, which depend on a very small subset of the elements of the periodic table, nearly anything will be inherently toxic (destructive, unaligned) to our organic life.
Hence, given the above, even *if* we had some predefined specific notion of “alignment”,
and *even if* that notion was also somehow fully formalizable, it simply would not matter.
Hence the use of notion of ‘alignment’ as being something non-mathematical like “aligned with human interests”, or even something much simpler and less complex like “does not kill (some) humans”—they are all just conceptual placeholders—they make understanding easier for the non-mathematicians that matter (policy people, tech company CEOs, VC investors, etc).
As such, for the sake of improved understanding and clarity, it has been found helpful to describe “alignment” as “consistent with the wellbeing of organic, carbon-based life on this planet”. If the AGI kills all life, it has already killed all humans too, so that notion is included. Moreover, if you destroy the ecosystems that humans deeply need in order to live at all (to have food, to thrive, to find and have happiness, to be sexual and have families), then that is clearly not “aligned with human interests”. This has the additional advantage of implying that any reasonable notion of ‘alignment complexity’ is roughly equal to that of specifying ‘ecosystem complexity’, which is actually about right.
Hence, the notion of ‘unaligned’ can be more formally set up and defined as “anything that results in a reduction of ecosystem complexity by more than X%”, or, as is more typically the case in x-risk mitigation analysis, “...by more than X orders of magnitude”.
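As a purely illustrative sketch of how such a threshold definition might be set up (the choice of Shannon diversity as a proxy for ‘ecosystem complexity’, and the 10% default threshold, are assumptions made here for illustration, not part of the argument above):

```python
import math

def shannon_diversity(species_counts):
    """Shannon diversity index, used here as a crude stand-in for
    'ecosystem complexity' (an assumed proxy, not a settled choice)."""
    total = sum(species_counts)
    return -sum((c / total) * math.log(c / total)
                for c in species_counts if c > 0)

def is_unaligned(counts_before, counts_after, max_reduction=0.10):
    """'Unaligned' per the X% definition above: complexity dropped by
    more than max_reduction (here X = 10%, an arbitrary placeholder)."""
    h_before = shannon_diversity(counts_before)
    h_after = shannon_diversity(counts_after)
    return (h_before - h_after) / h_before > max_reduction

# An even ecosystem of 10 species collapsing to 2 trips the threshold:
print(is_unaligned([100] * 10, [100] * 2))  # True
```

The point of the sketch is only that, once a complexity measure is fixed, the ‘reduction by more than X%’ clause becomes a checkable predicate.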
It is all rather depressing really.
This seems wrong to me: for any given algorithm you can find many equivalent but non-simplified algorithms with the same behavior, by adding a statement that does not affect the rest of the algorithm (e.g. adding a line such as
foobar1234 = 123
in the middle of a Python program). In fact, I would claim that the majority of Python programs on GitHub are not in their “maximally most simplified form”. Maybe you can cite the supposed theorem that claims that most (with a clearly defined “most”) algorithms are maximally simplified?

Yes, I agree formalisation is needed. See the comment by flandry39 in this thread on how one might go about doing so.
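The point about non-simplified equivalents is easy to demonstrate concretely; a minimal sketch (the function names are invented for illustration):

```python
def double(x):
    return x * 2

def double_padded(x):
    foobar1234 = 123  # dead assignment: never read, cannot affect the result
    return x * 2

# The two programs are behaviorally identical, yet the second is
# clearly not in its "maximally most simplified form".
assert all(double(n) == double_padded(n) for n in range(1000))
```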
Worth considering is that there are actually two aspects that make it hard to define the term ‘alignment’ such as to allow for sufficiently rigorous reasoning:
1. It must allow for logically valid reasoning (therefore requiring formalisation).
2. It must allow for empirically sound reasoning (i.e. the premises correspond with how the world works).
In my reply above, I did not help you much with (1). Even so, while still using the English language, I managed to restate a vague notion of alignment in more precise terms.
Notice how it does help to define the correspondences with how the world works, per (2):
“That ‘AGI’ continuing to exist, in some modified form, does not result eventually in changes to world conditions/contexts that fall outside the ranges that existing humans could survive under.”
The reason why (2) is important is that formalisation alone is not enough. Merely describing and/or deriving logical relations between mathematical objects says nothing about the physical world. Somewhere in your fully communicated definition there also needs to be a description of how the mathematical objects correspond with real-world phenomena. Often, mathematicians do this by talking with collaborators about what the symbols mean while they scribble them out on e.g. a whiteboard.
But whatever way you do it, you need to communicate how the definition corresponds to things happening in the real world in order to show that it is a rigorous definition. Otherwise, others could still critique the formally precise definition as not rigorous, because it does not adequately (or explicitly) represent the real-world problem.
This is maybe not the central point, but I note that your definition of “alignment” doesn’t precisely capture what I understand “alignment” or a good outcome from AI to be:
AGI could be very catastrophic even when it stops existing a year later.
If AGI makes earth uninhabitable in a trillion years, that could be a good outcome nonetheless.
I don’t know whether that covers “humans can survive on Mars with a space-suit”, but even then, if humans evolve/change to handle situations that they currently could not survive, that could be part of an acceptable outcome.
Thanks! These are thoughtful points. See some clarifications below:
You’re right. I’m not even covering all the other bad stuff that could happen in the short term, which we might still be able to prevent, like AGI triggering global nuclear war.
What I’m referring to is unpreventable convergence on extinction.
Agreed that could be a good outcome if it could be attainable.
In practice, the convergence reasoning is about total human extinction happening within 500 years after ‘AGI’ has been introduced into the environment (with very little probability mass remaining beyond that).
In theory, of course, to converge toward a 100% chance you are reasoning across a timeline of potentially infinite span.
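For intuition on how a small but persistent risk converges toward certainty over an unbounded timeline, consider a toy model with a constant per-year hazard rate (the 1%-per-year figure is purely illustrative, not a claim made in this thread):

```python
def p_extinct_by(years, annual_hazard=0.01):
    """Cumulative probability of extinction by a given year, assuming a
    constant, independent per-year hazard (a toy model only)."""
    return 1 - (1 - annual_hazard) ** years

# Even a small constant hazard accumulates toward certainty:
print(p_extinct_by(500))       # ~0.993
print(p_extinct_by(100_000))   # ~1.0
```

This is only a sketch of the ‘converge toward 100%’ shape of the reasoning; the actual argument concerns structural convergence, not an assumed fixed hazard rate.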
Yes, it does cover that. Whatever technological means we could think of to shield ourselves, or that ‘AGI’ could come up with to create as (temporary) barriers against the human-toxic landscape it creates, would still not be enough.
Unfortunately, this is not workable. The mismatch between the (expanding) set of conditions needed for maintaining/increasing configurations of the AGI’s artificial hardware and those needed for our human organic wetware is too great.
Also, if you try entirely changing our underlying substrate to the artificial substrate, you have basically removed the human and are left with ‘AGI’. Lossy scans of human brains ported onto hardware would no longer feel as humans feel, and would be further changed/selected to fit their artificial substrate. This is because what humans feel and express as emotions is grounded in the distributed and locally context-dependent functioning of organic molecules (e.g. hormones) in our bodies.