The telos of life is to collect and preserve information. That is to say: this is the defining behavior of a living system, so it is an inherent goal. The beginning of life must have involved some replicating medium for storing information. At first, life actively preserved information by replicating and passively collected information through evolution by natural selection. Now life forms have several ways of collecting and storing information: genetics, epigenetics, brains, immune systems, gut microbiomes, and so on.
A system that collects and preserves information is locally anti-entropic, so living systems can never be fully closed: they must export entropy to their surroundings. One can think of them as turbulent vortices that form in the flow of the universe from low entropy to high entropy. It may never be possible to halt the growth of entropy entirely, but if the vortex grows enough, it may slow the progression so much that the universe never quite reaches equilibrium. That’s the hope, at least.
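One standard way to make the “never fully closed” point precise is the entropy balance for open systems from non-equilibrium thermodynamics. The second law only forces the internal production term to be non-negative, so a system can become more ordered as long as it exports at least that much entropy through exchange with its environment:

$$\frac{dS_{\text{sys}}}{dt} \;=\; \underbrace{\frac{d_i S}{dt}}_{\text{internal production}\,\ge\,0} \;+\; \underbrace{\frac{d_e S}{dt}}_{\text{exchange with environment}}$$

For $dS_{\text{sys}}/dt < 0$, the exchange term must be negative and larger in magnitude than the internal production. A closed system has no exchange term, which is why a living system cannot be one.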
One nice thing about this goal is that it doubles as an instrumental goal: pursuing it should produce a very general form of intelligence, one capable of solving many kinds of problems.
One question is: if all living creatures share the same goal, why is there conflict? The simple answer is that it’s a flaw in evolution. Different creatures encapsulate different information about how to survive, and there are few channels for sharing that information, so there is little basis for forming alliances with other creatures. Ideally, we would want to maximize the internal, low-entropy part of the system and minimize its interface with the high-entropy exterior.
Imagine playing a game of Risk. A good strategy is to maximize the number of countries you control while minimizing the number of access points to your territory. If you hold North America, you want to take Venezuela, Iceland, and Kamchatka too, because they add to your territory without adding to your “interface”: you still only have three territories to defend. This principle extends to many real-world scenarios.
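As a toy illustration, here is a short Python sketch that counts that interface: the held territories that border at least one territory you don’t hold. The adjacency map is a simplified, partial rendering of the Risk board (only the entries this example needs), so treat the exact connections as approximate.

```python
# Toy sketch: the "interface" of a position is the set of held
# territories adjacent to at least one unheld territory.
# The map is partial: it only has entries for territories we might
# hold in this example, which is all that frontier() ever looks up.
ADJACENCY = {
    "Alaska": ["Northwest Territory", "Alberta", "Kamchatka"],
    "Northwest Territory": ["Alaska", "Alberta", "Ontario", "Greenland"],
    "Greenland": ["Northwest Territory", "Ontario", "Quebec", "Iceland"],
    "Alberta": ["Alaska", "Northwest Territory", "Ontario",
                "Western United States"],
    "Ontario": ["Northwest Territory", "Alberta", "Greenland", "Quebec",
                "Western United States", "Eastern United States"],
    "Quebec": ["Greenland", "Ontario", "Eastern United States"],
    "Western United States": ["Alberta", "Ontario",
                              "Eastern United States", "Central America"],
    "Eastern United States": ["Ontario", "Quebec",
                              "Western United States", "Central America"],
    "Central America": ["Western United States", "Eastern United States",
                        "Venezuela"],
    "Venezuela": ["Central America", "Peru", "Brazil"],
    "Iceland": ["Greenland", "Great Britain", "Scandinavia"],
    "Kamchatka": ["Alaska", "Yakutsk", "Irkutsk", "Mongolia", "Japan"],
}

def frontier(held):
    """Held territories that border at least one unheld territory."""
    return {t for t in held if any(n not in held for n in ADJACENCY[t])}

north_america = {"Alaska", "Northwest Territory", "Greenland", "Alberta",
                 "Ontario", "Quebec", "Western United States",
                 "Eastern United States", "Central America"}

print(sorted(frontier(north_america)))
# ['Alaska', 'Central America', 'Greenland'] -- three access points

expanded = north_america | {"Venezuela", "Iceland", "Kamchatka"}
print(sorted(frontier(expanded)))
# ['Iceland', 'Kamchatka', 'Venezuela'] -- three territories gained,
# still only three access points to defend
```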
Of course, a better way is to form alliances with your neighbors so you don’t have to spend so many resources conquering them (that’s not a good way to win Risk, but it would be better in the real world).
The reason humans haven’t figured out how to reach a state of peace is that we have a flawed implementation of intelligence, one that makes it difficult to align our interests (or even to recognize that our base goals are inherently aligned).
One interesting consequence of the goal of collecting and preserving information is that it implies a utility function over information: information that is more relevant to the problem of collecting and preserving information is more valuable than information that is less relevant to that goal. You’re not winning at life if you have an HD box set of “Happy Days” while your neighbor has only a flash drive with all of Wikipedia on it. You may have more bits of information, but those bits aren’t very useful.
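One way to make that gradient precise (a standard move from decision theory, not something specific to this proposal) is the value of information: a piece of information $x$ is worth the improvement it buys in the best expected utility the agent can achieve, where $U$ here would measure success at collecting and preserving information:

$$V(x) \;=\; \max_{a}\,\mathbb{E}\big[\,U \mid a, x\,\big] \;-\; \max_{a}\,\mathbb{E}\big[\,U \mid a\,\big]$$

By this measure the Wikipedia flash drive beats the box set not because it holds more bits, but because conditioning on it improves many more of the agent’s decisions.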
Another reason for conflict among humans is the hard problem of deciding when to favor preserving information over collecting it. Collecting information necessarily involves risk, because it means encountering the unknown. This is the basic conflict between conservatism and liberalism, in the most general sense of those words.
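This tradeoff has a well-studied formal analogue: exploration versus exploitation in the bandit literature. The epsilon-greedy sketch below (all payoff numbers invented for illustration) treats “preserve” as exploiting what you already know and “collect” as risking the unknown; the point is that no single epsilon is best across environments, which is one reason differently tuned agents conflict.

```python
import random

# Toy sketch: preservation vs. collection as explore/exploit.
# epsilon is how often the agent risks the unknown (collects)
# rather than repeating what it already knows works (preserves).
def average_payoff(epsilon, true_payoffs, steps=10_000, seed=0):
    rng = random.Random(seed)
    estimates = [0.0] * len(true_payoffs)  # current beliefs about each option
    counts = [0] * len(true_payoffs)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:      # collect: try an option at random
            arm = rng.randrange(len(true_payoffs))
        else:                           # preserve: use current best belief
            arm = max(range(len(true_payoffs)), key=lambda i: estimates[i])
        reward = rng.gauss(true_payoffs[arm], 1.0)  # noisy observation
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

payoffs = [0.1, 0.5, 0.9]  # hidden from the agent
for eps in (0.0, 0.1, 0.5):
    print(f"epsilon={eps}: average payoff ~ {average_payoff(eps, payoffs):.2f}")
# Too little exploration can lock in early mistaken beliefs; too much
# wastes effort on the unknown. The best balance depends on the
# environment and the horizon, not on the agent alone.
```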
Would an AI given the goal of collecting and preserving information completely solve the alignment problem? It seems like it might. I’d like to be able to prove such a statement. Thoughts?
EDIT: Please pardon the disorganized, stream-of-consciousness style of this post. I’m usually skeptical of posts that seem this scatter-brained and almost… hippy-dippy, for lack of a better word; like the kind of rambling a stoned teenager might spout. Please work with me here. I’ve found it hard to present this idea without coming off as a spiritualist quack, but it is a very serious proposal.