Not “defensible”: probable. Check out the way my post is voted down well below the threshold, though. This appears to be a truth that this community doesn’t want to hear about.
assuming that moral progress exists it doesn’t seem strictly monotonic [...]
Sure. Evolutionary progress is not “strictly monotonic”. Check out the major meteorite strikes—for instance.
I didn’t downvote your comment (or see it until now) but I think you’re mistaken about the reasons for downvoting.
You state a consideration that most everyone is aware of (growth of instrumentally useful science, technology, institutions for organizing productive competitive units, etc). Then you say that it implies a further controversial conclusion that many around here disagree with (despite knowing the consideration very well), completely ignoring the arguments against. And you phrase the conclusion as received fact, misleadingly suggesting that it is not controversial.
If you referenced the counterarguments against your position and your reasons for rejecting them, and acknowledged the extent of (reasoned) disagreement, I don’t think you would have been downvoted (and probably upvoted). This pattern is recurrent across many of your downvoted comments.
Then you say that it implies a further controversial conclusion that many around here disagree with
I’m not quite sure that many around here disagree with it as such; I may be misinterpreting User:timtyler, but the claim isn’t necessarily that arbitrary superintelligences will contribute to “moral progress”; rather, the claim is that the superintelligences that are actually likely to be developed some decades down the line are likely to contribute to “moral progress”. Presumably if SingInst’s memetic strategies succeed or if the sanity waterline rises then this would at least be a reasonable expectation, especially given widely acknowledged uncertainty about the exact extent to which value is fragile and uncertainty about what kinds of AI architectures are likely to win the race. This argument is somewhat different from the usual “AI will necessarily heed the ontologically fundamental moral law” argument, and I’m pretty sure User:timtyler agrees that caution is necessary when working on AGI.
Many here disagree with the conclusion that superintelligences are likely to be super-moral?
If so, I didn’t really know that. The only figure I have ever seen from Yudkowsky for the chance of failure is the rather vague one of “easily larger than 10%”. The “GLOBAL CATASTROPHIC RISKS SURVEY”—presumably a poll of the ultra-paranoid—came with a broadly similar chance of failure by 2100, far below 50%. Like many others, I figure that, if we don’t fail, then we are likely to succeed.
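To spell out the arithmetic behind “if we don’t fail, then we are likely to succeed”: it is just the decomposition P(good outcome) = P(good outcome given no catastrophic failure) × P(no catastrophic failure). A minimal sketch, with purely illustrative numbers that are not taken from Yudkowsky or the survey:

    # Illustrative only: these probabilities are placeholders, not anyone's estimates.
    p_failure = 0.15                 # assumed chance of catastrophic failure
    p_good_if_no_failure = 0.9       # assumed chance things go well, conditional on avoiding failure

    p_good = (1 - p_failure) * p_good_if_no_failure
    print(p_good)                    # roughly 0.765 under these assumptions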
Do the pessimists have an argument? About the only argument I have seen argues that superintelligences will be psychopaths “by default” since most goal-directed agents are psychopaths. That argument is a feeble one. Similarly, the space of all possible buildings is dominated by piles of rubble—and yet the world is filled with skyscrapers. Looking at evolutionary trends—as I proposed—is a better way of forecasting than looking at the space of possible agents.
Your original comment seemed to be in response to this:
I believe Schmidhuber takes something of a middleground here; he seems to agree with the optimization/compression model of intelligence, and that AIs aren’t necessarily going to be human-friendly, but also thinks that intelligence/compression is fundamentally tied into things like beauty and humor in a way that might make the future less bleak & valueless than SingInst folk tend to picture it.
I.e. your conclusion seemed to be that the products of instrumental reasoning (conducting science, galactic colonization, building factories, etc) and evolutionary competition would be enough to capture most of the potential value of the future. That would make sense in light of your talk about evolution and convergence. If all you mean is that “I think that the combined probability of humans shaping future machine intelligence to be OK by my idiosyncratic standards, or convergent instrumental/evolutionary pressures doing so is above 0.5”, then far fewer folk will have much of a bone to pick with you.
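For what it is worth, the “combined probability” reading is just inclusion-exclusion over the two routes. A minimal sketch with made-up numbers (not anyone’s actual estimates), where A is deliberate human shaping going acceptably and B is convergent instrumental/evolutionary pressures doing the job anyway:

    # P(A or B) = P(A) + P(B) - P(A and B); all figures below are hypothetical placeholders.
    p_a = 0.4          # assumed: humans shape machine intelligence acceptably
    p_b = 0.3          # assumed: convergent pressures produce an acceptable outcome anyway
    p_a_and_b = 0.15   # assumed overlap (the two routes are not independent)

    p_a_or_b = p_a + p_b - p_a_and_b
    print(p_a_or_b)    # roughly 0.55 under these assumptions, i.e. above 0.5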
But it seems that there is sharper disagreement on the character or valuation of the product of instrumental/evolutionary forces. I’ll make some distinctions and raise three of the arguments often made.
Some of the patterns of behavior that we call “moral” seem broadly instrumentally rational: building a reputation for tit-for-tat among agents who are too powerful to simply prey upon, use of negotiation to reduce the deadweight loss of conflict, cultivating positive intentions when others can pierce attempts at deception. We might expect that superintelligence would increase effectiveness in those areas, as in others (offsetting increased potential for cheating). Likewise, on an institutional level, superintelligent beings (particularly ones able to establish reputations for copy-clans, make their code transparent, and make binding self-modifications) seem likely to be able to do better than humans in building institutions to coordinate with each other (where that is beneficial). In these areas I am aware of few who do not expect superhuman performance from machine intelligence in the long-term, and there is a clear evolutionary logic to drive improvements in competitive situations, along with the instrumental reasoning of goal-seeking agents.
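To make the game-theoretic part of this concrete, here is a toy sketch (mine, not anything from the comment being replied to) of why a reputation for tit-for-tat pays in a repeated prisoner’s dilemma: mutual tit-for-tat sustains cooperation and outscores mutual defection, while a lone defector gains only a one-round advantage.

    # Toy iterated prisoner's dilemma: tit-for-tat vs. unconditional defection.
    # Row player's payoffs: both cooperate -> 3, both defect -> 1,
    # defect against a cooperator -> 5, cooperate against a defector -> 0.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def tit_for_tat(history):
        # Cooperate first, then copy the opponent's previous move.
        return 'C' if not history else history[-1][1]

    def always_defect(history):
        return 'D'

    def play(strategy_a, strategy_b, rounds=100):
        history_a, history_b = [], []   # each entry: (own move, opponent's move)
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_a)
            move_b = strategy_b(history_b)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            history_a.append((move_a, move_b))
            history_b.append((move_b, move_a))
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))        # (300, 300): sustained cooperation
    print(play(always_defect, always_defect))    # (100, 100): mutual defection
    print(play(tit_for_tat, always_defect))      # (99, 104): exploited once, then retaliation

The usual caveat applies: this shows the instrumental logic of cooperation among roughly matched agents, which is the “game-theoretic interactions with powerful potential allies and rivals” case, not the “use of slack” case discussed next.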
However, the net effect of these instrumental virtues and institutions depends on the situation and aims of the players. Loyalty and cooperation within the military of Genghis Khan were essential to the death of millions. Instrumental concerns helped to moderate the atrocities (the Khan is said to have originally planned to reduce the settled areas to grassland, but was convinced of the virtues of leaving victims alive to pay tribute again), but also enabled them. When we are interested in the question of how future agents will spend their resources (as opposed to their game-theoretic interactions with powerful potential allies and rivals), or how they use their “slack”, instrumental cooperative skill need not be enough. And we may “grade on a curve”: creatures that dedicate a much smaller portion of their slack to what we see as valuable, but have more resources simply due to technological advance or space colonization, may be graded poorly by comparison to the good that could have been realized by creatures that used most of their slack for good.
One argument that the evolutionary equilibrium will not be very benevolent in its use of slack is made by Greg Cochran here. He argues that much of our wide-scope altruism, of the sort that leads people to help the helpless (distant poor, animals, etc), is less competitive than a more selective sort. Showing kindness to animals may signal to allies that one will treat them well, but at a cost that could be avoided through reputation systems and source code transparency. Wide-scope altruistic tendencies that may have been selected for in small groups (mostly kin and frequent cooperation partners) are now redirected and cause sacrifice to help distant strangers, and would be outcompeted by more focused altruism.
Robin Hanson claims that much of what Westerners today think of as “moral progress” reflects a move to “forager” ideals in the presence of very high levels of per capita wealth and reduced competition. Since he expects a hypercompetitive Malthusian world following from rapid machine intelligence reproduction, he also expects a collapse of much of what moderns view as moral progress.
Eliezer Yudkowsky’s argument is that (idealized) human-preferred use of “slack” resources would be very different from those of AIs that would be easiest to construct, and attractive initially (e.g. AIXI-style sensory utility functions, which can be coded relatively directly, rather than using complex concepts that have to be learned and revised, and should deliver instrumental cooperation from weak AIs). That is not the same as talk about a randomly selected AI (although the two are not unrelated). Such an AI might dedicate all distant resources to building factories, improving its technology, and similar pursuits, but only to protect a wireheading original core. In contrast a human civilization would use a much larger share of resources to produce happy beings of a sort we would consider morally valuable for their own sakes.
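A rough sketch of why the “easiest to construct” utility functions look like this (my own toy illustration; the function names are hypothetical): a sensory utility is a one-liner over the agent’s raw percepts, while a utility over morally valuable beings needs a specification of concepts that nobody currently knows how to write down.

    # Toy contrast, for illustration only.

    def sensory_utility(observation):
        # AIXI-style: read utility straight off a reward channel in the percept.
        # Trivial to code, and maximised just as well by tampering with the
        # channel (wireheading) as by improving the world.
        return observation['reward_channel']

    def count_flourishing_minds(world_model):
        # Placeholder for the hard, unsolved part: specifying 'morally valuable
        # beings' in terms the agent's world model can actually evaluate.
        raise NotImplementedError

    def concept_based_utility(world_model):
        # Utility defined over inferred features of the world, not raw percepts.
        return count_flourishing_minds(world_model)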
That’s an interesting and helpful summary comment, Carl. I’ll see if I can make some helpful responses to the specific theories listed above—in this comment’s children:
Regarding Robin Hanson’s proposed hypercompetitive Malthusian world:
Hanson imagines lots of small ems—on the grounds that coordination is hard. I am much more inclined to expect large scale structure and governance—in which case the level of competition between the agents can be configured to be whatever the government decrees.
It is certainly true that there will be rapid reproduction of some heritable elements in the future. Today we have artificial reproducing systems of various kinds. One type is memes. Another type is companies. They are both potentially long-lived, and often not too many people mourn their passing. We will probably be able to set things up so that the things that we care about are not the same things as the ones that must die. Today we are in the dark ages in that respect—because dead brains are like burned libraries. In the future, minds will be able to be backed up—so genuinely valuable things are less likely to get lost.
Greg is correct that altruism based on adaptation to small groups of kin can be expected to eventually burn out. However, the large scale of modern virtue signalling and reputations massively compensates for that—those mechanisms can even create cooperation between total strangers on distant continents. What we are gaining massively exceeds what we are losing.
It’s true that machines with simple value systems will be easier to build. However, machines will only sell to the extent that they do useful work, respect their owners and obey the law. So there will be a big effort to build machines that respect human values starting long before machines get very smart. You can see this today in the form of car air bags, blender safety features, privacy controls—and so on.
I don’t think that it is likely that civilisation will “drop that baton” and suffer a monumental engineering disaster as the result of an accidental runaway superintelligence—though sure, such a possibility is worth bearing in mind. Most others that I am aware of also give such an outcome a relatively low probability—including, AFAICT, Yudkowsky himself. The case for worrying about it is not that it is especially likely, but that it is not impossible—and could potentially be a large loss.
your conclusion seemed to be that the products of instrumental reasoning (conducting science, galactic colonization, building factories, etc) and evolutionary competition would be enough to capture most of the potential value of the future. That would make sense in light of your talk about evolution and convergence.
I didn’t mean to say anything about “instrumental reasoning”.
I do in fact think that universal instrumental values may well be enough to preserve some humans for the sake of the historical record, but that is a different position on a different topic—from my perspective.
My comment was about evolution. Evolution has produced the value in the present and will produce the value in the future. We are part of the process—and not some kind of alternative to it.
Competition represents the evolutionary process known as natural selection. However there’s more to evolution than natural selection—there’s also symbiosis and mutation. Mutations will be more interesting in the future than they have been in the past—what with the involvement of intelligent design, interpolation, extrapolation, etc.
As it says in Beyond AI, “Intelligence is Good”. The smarter you are, the kinder and more benevolent you tend to be. The idea is supported by game theory, comparisons between animals, comparisons within modern humans, and by moral progress over human history.
We can both see empirically that “Intelligence is Good”, and understand why it is good.
For my own part, I neither find it likely that an arbitrarily selected superintelligence will be “super-moral” given the ordinary connotations of that term, nor that it will be immoral given the ordinary connotations of that term. I do expect it to be amoral by my standards.
That it’s an AI is irrelevant; I conclude much the same thing about arbitrarily selected superintelligent NIs. (Of course, if I artificially limit my selection space to superintelligent humans, my predictions change.)
FWIW, an “arbitrarily selected superintelligence” is not what I meant at all. I was talking about the superintelligences we are likely to see—which will surely not be “arbitrarily selected”.
While thinking about “arbitrarily selected superintelligences” might make superintelligence seem scary, the concept has relatively little to do with reality. It is like discussing arbitrarily selected computer programs. Fun for philosophers—maybe—but not much use for computer scientists or anyone interested in how computer programs actually behave in the real world.
I’ll certainly agree that human-created superintelligences are more likely to be moral in human terms than, say, dolphin-created superintelligences or alien superintelligences.
If I (for example) restrict myself to the class of superintelligences built by computer programmers, it seems reasonable to assume their creators will operate substantively like the computer programmers I’ve worked with (and known at places like MIT’s AI Lab). That assumption leads me to conclude that insofar as they have a morality at all, that morality will be constructed as a kind of test harness around the underlying decision procedure, under the theory that the important problem is making the right decisions given a set of goals. That leads me to expect the morality to be whatever turns out to be easiest to encode and not obviously evil. I’m not sure what the result of that is, but I’d be surprised if I recognized it as moral.
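For concreteness, here is a hypothetical sketch (all names invented) of the “test harness around the underlying decision procedure” pattern being described: the goals drive the choice, and morality is a bolt-on filter containing whatever was easiest to encode.

    # Hypothetical sketch: morality as a post-hoc filter around a goal-driven
    # decision procedure, rather than something built into the goals themselves.

    FORBIDDEN_TAGS = {'fraud', 'violence'}   # whatever was easiest to encode

    def passes_morality_harness(option):
        # Reject options that trip an explicit blacklist.
        return FORBIDDEN_TAGS.isdisjoint(option['tags'])

    def decide(goals, options):
        permitted = [o for o in options if passes_morality_harness(o)]
        # Core decision procedure: pick whatever best serves the given goals.
        return max(permitted, key=lambda o: sum(score(o) for score in goals))

    # Toy usage:
    options = [
        {'name': 'honest plan', 'tags': set(), 'value': 3},
        {'name': 'scam plan', 'tags': {'fraud'}, 'value': 9},
    ]
    goals = [lambda o: o['value']]
    print(decide(goals, options)['name'])    # 'honest plan'

Anything the filter fails to anticipate passes through untouched, which is one way to read the worry expressed above.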
If I instead restrict myself to the class of superintelligences constructed by intelligence augmentation of humans, say, I expect the resulting superintelligence to work out a maximally consistent extension of human moral structures. I expect the result to be recognizably moral as long as we unpack that morality using terms like “systems sufficiently like me” rather than terms like “human beings.” Given how humans treat systems as much unlike us as unaugmented humans are unlike superintelligent humans, I’m not looking forward to that either.
So… I dunno. I’m reluctant to make any especially confident statement about the morality of human-created superintelligences, but I certainly don’t consider “super-moral” some kind of default condition that we’re more likely to end up in than we are to miss.
Meteor strikes aren’t an example of non-monotonic progress in evolution, are they? I mean, in terms of fitness/adaptedness to environment, meteor strikes are just extreme examples of the way “the environment” is a moving target. Most people here, I think, would say morality is a moving target as well, and our current norms only look like progress from where we’re standing (except for the parts that we can afford now better than in the EEA, like welfare and avoiding child labor).
Meteor strikes aren’t an example of non-monotonic progress in evolution, are they?
Yes, they are. Living systems are dissipative processes. They maximise entropy production. The biosphere is an optimisation process with a clear direction. Major meteorite strikes are normally large setbacks—since a lot of information about how to dissipate energy gradients is permanently lost, reducing the biosphere’s capacity for maximising entropy production.
Most people here, I think, would say morality is a moving target as well, and our current norms only look like progress from where we’re standing (except for the parts that we can afford now better than in the EEA, like welfare and avoiding child labor).
Not stoning, flogging, killing, raping and stealing from each other quite so much is moral progress too. Those were bad way back when as well—but they happened more.
Game theory seems to be quite clear about there being a concrete sense in which some moral systems are “better” than others.
I think people can’t disentangle your factual claim from what they perceive to be the implication that we shouldn’t be careful when trying to engineer AGIs. I’m not really sure that they would strongly disagree with the factual claim on its own. It seems clear that something like progress has happened up until the dawn of humans; but I’d argue that it reached its zenith sometime between 100,000 and 500 years ago, and that technology has overall led to a downturn in the morality of the common man. But it might be that I should focus on the heights rather than the averages.
I think people can’t disentangle your factual claim from what they perceive to be the implication that we shouldn’t be careful when trying to engineer AGIs.
Hmm—no such implication was intended.
It seems clear that something like progress has happened up until the dawn of humans; but I’d argue that it reached its zenith sometime between 100,000 and 500 years ago, and that technology has overall led to a downturn in the morality of the common man.
The end of slavery and a big downturn in warfare and violence occurred on those timescales. Steven Pinker, for one, would not agree with you. In his recent book he says that the pace of moral progress has accelerated in the last few decades. Pinker notes that on issues such as civil rights, the role of women, equality for gays, beating of children and treatment of animals, “the attitudes of conservatives have followed the trajectory of liberals, with the result that today’s conservatives are more liberal than yesterday’s liberals.”
We will probably be able to set things up so that the things that we care about are not the same things as the ones that must die. [...] In the future, minds will be able to be backed up—so genuinely valuable things are less likely to get lost.
I don’t often agree with you, but you just convinced me we’re on the same side.
Regarding cooperation within the military of Genghis Khan: I don’t think that is the bigger picture.
The bigger picture is more like Robert Wright’s “How cooperation (eventually) trumps conflict”.