Thanks for the reference to CEV. That seems to answer the “Friendly to whom?” question with “some collective notion of humanity”.
Humans have different visions of the future, and you can’t please all the people. So issues arise over whether you please the luddites or the technophiles, the capitalists or the communists, and so on: whose views do you give weight to, and how do you resolve differences of opinion?
Also: what is “humanity”? The answer to this question seems obvious today, but in a future where we have intelligent machines, our strong tendency to anthropomorphise means that we may well regard them as people too. If so, do they then get a say in the future?
If not, there seems to be a danger that placing too great a value on humans (as in Homo sapiens sapiens) could cause evolutionary progress to get “stuck” in an undesirably backward state:
Humans are primitive organisms, close relatives of mice. In the natural order of things, they seem destined to go up against the wall pretty quickly: organisms cobbled together by random mutations won’t be able to compete in a future populated by engineered agents. Placing a large value on biological humans may offer some possibility of deliberately hindering development, by valuing the old over the new. The problem is that “the old” is essentially a load of low-tech rubbish, and there are real dangers in placing “excessive” value on it. Most obviously, attempts to keep biological humans at the core of civilisation look as though they would cripple our civilisation’s spaceworthiness and severely limit its rate of expansion into the galaxy, thus increasing the chances of its ultimate obliteration at the hands of an asteroid or an alien civilisation. I see this as a potentially problematic stance: this type of thinking runs a risk of sterilising our civilisation.