Unfortunately, it seems to me that moral anti-realism and axiological anti-realism place limits on our ability to “optimize” the universe.
To put the argument in simple terms:
Axiological/moral anti-realism holds that there are no categorically good states of the universe. On this we agree. The goodness of a state of the universe is contingent upon the desires and values of whoever asks the question; in this case, us.
Human minds can store only a finite amount of information in their preferences. Humans who have spent more time developing their character beyond the evolutionarily programmed desires (food, sex, friendship, etc.) will fare slightly better than those who haven’t, i.e. their preferences will be more complex. But probably not by very much, information-theoretically speaking. The amount of information your preferences can absorb from reading books, having life experiences, and so on is probably small compared to the information implicit in just being human.
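To give this comparison a concrete shape, here is a rough back-of-envelope sketch in Python. Every figure in it (reading volume, bits per word, the fraction of reading that actually shapes preferences, the genome numbers used as a proxy for “the information implicit in just being human”) is an illustrative assumption, not a measurement; the point is only to show what kind of comparison is being made.

```python
# Back-of-envelope comparison: preference information absorbed over a lifetime
# of reading and experience vs. information implicit in simply being human.
# Every figure below is an illustrative assumption, not a measurement.

# Assumed: information absorbed into preferences from reading and experience.
words_read_per_year = 2_000_000       # a heavy reader (assumption)
years_of_reading = 50
bits_per_word = 10                    # order-of-magnitude after compression (assumption)
fraction_shaping_preferences = 0.01   # most input doesn't change what we want (assumption)

acquired_bits = (words_read_per_year * years_of_reading
                 * bits_per_word * fraction_shaping_preferences)

# Assumed: "information implicit in just being human", crudely proxied by the
# functional portion of the genome.
genome_bases = 3_000_000_000
functional_fraction = 0.1             # crude assumption
bits_per_base = 2

built_in_bits = genome_bases * functional_fraction * bits_per_base

print(f"acquired preference information: ~{acquired_bits:.1e} bits")
print(f"built-in 'being human' information: ~{built_in_bits:.1e} bits")
print(f"ratio (acquired / built-in): ~{acquired_bits / built_in_bits:.3f}")
```

Under these made-up figures the acquired term comes out one to two orders of magnitude smaller than the built-in term, though different assumptions could narrow or widen that gap considerably.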
The set of mutually agreed preferences of any group of humans will typically be smaller than the preference set of any single member. Hence it is not surprising that the recent article on “Failed Utopia #4-2” produced a lot of disagreement regarding the goodness of that world.
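As a toy illustration of why shared preferences shrink with group size (not a model of real humans), here is a minimal Python sketch: each simulated person endorses a random subset of a pool of candidate preferences, and the group’s mutually agreed preferences are the intersection of those sets. The pool size and endorsement probability are arbitrary assumptions.

```python
import random

# Toy model: each person endorses a random subset of a fixed pool of candidate
# preferences. The mutually agreed preferences of a group are the intersection
# of its members' sets, which can only shrink as the group grows.
random.seed(0)

N_CANDIDATES = 1000   # size of the space of candidate preferences (arbitrary)
P_ENDORSE = 0.9       # chance that any one person holds a given preference (arbitrary)

def random_person():
    return {p for p in range(N_CANDIDATES) if random.random() < P_ENDORSE}

group = [random_person() for _ in range(50)]

shared = set(group[0])
for size, person in enumerate(group, start=1):
    shared &= person
    if size in (1, 2, 5, 10, 25, 50):
        print(f"group of {size:3d}: {len(shared):4d} mutually agreed preferences")
```

Even with 90% agreement on any individual item, the common core collapses as the group grows; this is just the multiplicative structure of intersections, so the qualitative point does not depend on the particular numbers chosen.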
The world we currently live in here in the US/UK/EU fails to fulfill many of the base preferences that are common to all humans; notable examples include dissatisfaction with the opposite sex, boring jobs, depression, and aging.
If one optimized over these unfulfilled preferences, one would get something that resembled, for most people, a low-grade utopia looking approximately like Banks’ Culture. This low-grade utopia would probably be only a small amount of information away from the world we see today. Not that it isn’t worth doing, of course!
This explains a lot of things; for example, the World Transhumanist Association’s change of name to “Humanity Plus”. “Humanity Plus” is code for “low-grade utopia for all”. “Transhumanist” is code for the futures that various oddball individuals envisage, in which they (somehow) optimize themselves far beyond the usual human preference set. These two futures are eminently compatible: we can have them both, but most people show no interest in the second set of possibilities. It will be interesting to think about the continuum between these two goals.

It’s also interesting to wonder whether the goals of “radical” transhumanists might be a little self-contradictory. With a limited human brain, you can (as a matter of physical fact) only entertain thoughts that constrain the future to a limited degree. Even with all technological obstacles out of the way, our imaginations might place a hard limit on how good a future we can try to build for ourselves. Anyone who tries to exceed this limit will end up (somehow) absorbing noise from their environment and incorporating it into their preferences. Not that I have anything against this (it is how we got our preferences in the first place), though it is not a strong motivator for me to fantasize about spending eternity fulfilling preferences that I don’t have yet and which I will generate at random at some point in the future when I realize that my extant preferences have “run out of juice”.
This, I fear, is a serious torpedo in the side of the transhumanist ideal. I eagerly await somebody proving me wrong here...