What I described involves some similar ideas, but I find the notion of a singleton unlikely, or at least suboptimal. The singleton is a machine analogy for life and intelligence. A machine is a collection of parts, all working together under one common control to one common end. Living systems, by contrast, and particularly large evolving systems such as ecosystems or economies, work best, in our experience, when they have no centralized control but instead a variety of competing agents and some randomness.
There are a variety of proposals floating about for ways to get the benefits of competition without actually having competition. The problem with competition is that it opens the door to many moral problems. Eliezer may believe that correct Bayesian reasoners won't have these problems, because they will agree about everything. This ignores the fact that it is not computationally efficient, physically possible, or even semantically possible (the statement is incoherent without a definition of "agent") for all agents to have all available information. It also ignores the fact that randomness, and a multitude of random starts in competition with each other, are very useful in exploring search spaces.
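To make the random-starts point concrete, here is a minimal sketch (the names and numbers are illustrative choices of mine, not anyone's established algorithm) of random-restart hill climbing, where many independent starts compete and the best result wins:

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=200):
    """Greedy local search: keep a random perturbation only if it improves f."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def multi_start(f, n_starts=20, lo=-10.0, hi=10.0):
    """Run many independent climbers from random starting points and keep
    the best. Each start is a 'competitor'; their diversity guards against
    the whole search being trapped on one local peak."""
    results = [hill_climb(f, random.uniform(lo, hi)) for _ in range(n_starts)]
    return max(results, key=f)

# A bumpy objective with many local maxima: a single climber often stalls
# on a minor peak, while competing random starts usually find a better one.
bumpy = lambda x: math.sin(3 * x) - 0.1 * x * x
print(multi_start(bumpy))
```

The point is not this particular algorithm but the pattern: parallel, partly random searchers exploring independently tend to beat a single centrally directed search on rugged landscapes.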
I don’t think we can eliminate competition; and I don’t think we should, because most of our positive emotions were selected for by evolution only because we were in competition. Removing competition would unground our emotional preferences (e.g., loving our mates and children, enjoying accomplishment), perhaps making their continued presence in our minds evolutionarily unstable, or simply superfluous (and thus slated for disposal, since the moral imperative I am most confident a singleton would follow is to use energy efficiently).
The concept of a singleton is misleading, because it makes people focus on the subjectivity (or consciousness; I use these terms as synonyms) of the top level in the hierarchy. Thus, just using the word singleton causes people to gloss over the most important moral questions to ask about a large hierarchical system. For starters, where are the loci of consciousness in the system? Saying "just at the top" is probably wrong.
Imagining a future that isn't ethically repugnant requires some preliminary answers to questions about consciousness, or whatever concept we use to determine which agents need to be included in our moral calculations. One line of thought is to impose information-theoretic requirements on consciousness, such as that a conscious entity has exactly one possible symbol grounding connecting its thoughts to the outside world. You can derive lower bounds for consciousness from this supposition. Another would be to posit that the degree of consciousness is proportional to the degree of freedom, and to state this with an entropy measure relating a process's inputs to its possible outputs.
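As a sketch of that second proposal (a toy formalization of my own, assuming "degree of freedom" is read as the conditional entropy of a process's outputs given its inputs):

```python
from collections import Counter
from math import log2

def freedom(pairs):
    """H(output | input) over observed (input, output) pairs: the average
    uncertainty remaining in the output once the input is known. A
    deterministic process scores 0 bits; a process whose responses are
    underdetermined by its stimuli scores higher."""
    joint = Counter(pairs)
    by_input = Counter(i for i, _ in pairs)
    n = len(pairs)
    h = 0.0
    for (i, _), c in joint.items():
        h -= (c / n) * log2(c / by_input[i])
    return h

# A pure function: output fully determined by input -> 0.0 bits.
print(freedom([(0, 0), (1, 1), (0, 0), (1, 1)]))
# Same inputs, varying outputs -> 1.0 bit of freedom.
print(freedom([(0, 0), (0, 1), (1, 0), (1, 1)]))
```

On this reading a thermostat has almost no freedom, while a system whose outputs are only loosely constrained by its inputs has more; whether that quantity tracks consciousness is, of course, exactly the open question.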
Having constraints such as these would allow us to begin to identify the agents in a large, interconnected system; and to evaluate our proposals.
I’d be interested in whether Eliezer thinks CEV requires a singleton. It seems to me that it does. I am more in favor of an ecosystem or balance-of-power approach that uses competition than of a totalitarian machine that excludes it.