That’s an interesting and helpful summary comment, Carl. I’ll see if I can make some useful responses to the specific theories listed above, in this comment’s children:
Regarding Robin Hanson’s proposed hypercompetitive Malthusian world:
Hanson imagines lots of small ems, on the grounds that coordination is hard. I am much more inclined to expect large-scale structure and governance, in which case the level of competition between agents can be set to whatever the government decrees.
It is certainly true that there will be rapid reproduction of some heritable elements in the future. Today we have artificial reproducing systems of various kinds. One type is memes. Another type is companies. Both are potentially long-lived, and often not many people mourn their passing. We will probably be able to set things up so that the things we care about are not the same things as the ones that must die. These are the dark ages in that respect, because dead brains are like burned libraries. In the future, minds will be able to be backed up, so genuinely valuable things are less likely to get lost.
I don’t often agree with you, but you just convinced me we’re on the same side.
Greg is correct that altruism based on adaptation to small groups of kin can be expected to eventually burn out. However, the large scale of modern virtue signalling and reputation mechanisms massively compensates for that; those mechanisms can even create cooperation between total strangers on distant continents. What we are gaining massively exceeds what we are losing.
It’s true that machines with simple value systems will be easier to build. However, machines will only sell to the extent that they do useful work, respect their owners and obey the law. So there will be a big effort to build machines that respect human values starting long before machines get very smart. You can see this today in the form of car air bags, blender safety features, privacy controls—and so on.
I don’t think it is likely that civilisation will “drop the baton” and suffer a monumental engineering disaster as the result of an accidental runaway superintelligence, though sure, such a possibility is worth bearing in mind. Most others that I am aware of also give such an outcome a relatively low probability, including, as far as I can tell, Yudkowsky himself. The case for worrying about it is not that it is especially likely, but that it is not impossible, and could potentially be a large loss.