Thanks! The post was successful then. Your point about stickiness is a good one; perhaps I was wrong to emphasize the change in number of ideologies.
The “AI takeover without AGI or agency” bit was a mistake in retrospect. I don’t remember why I wrote it, but I think it was a reference to this post, which argues that what we really care about is AI-PONR, and that AI takeover is just a prominent special case. It also might have been because a world in which an ideology uses AI tools to cement itself and take over the world can be thought of as a case of AI takeover, since we have AIs bossing everyone around and getting them to do bad things that ultimately lead to x-risk. It’s just a weird case in which the AIs aren’t agents or general intelligences. :)
I also don’t really see the situation as being about AI at all. It’s a structural advantage for certain kinds of values that tend to win out in memetic competition / tend to be easiest to persuade people to adopt / etc. Let’s call such values “attractive.”
The most attractive values given a new technological/social situation are likely to be similar to those given the immediately preceding situation, so I’d expect the most attractive values to generally already be endemic, or close enough to endemic values that they don’t look like they’re coming out of left field.
And of course, for any given zero-sum conflict and any given human, one of the participants in that conflict would prefer to push the human towards more attractive values, so those values would be introduced even if not initially endemic.
I don’t think you can get paperclips this way, because people trying to get humans to maximize paperclips would be at a big disadvantage in memetic competition compared with the most attractive values (or even compared to more normal human values, which are presumably more attractive than random stuff).
Then the usual hope is that we are happy with attractive values, e.g. because deliberation and intentional behavior by humans make “smarter” forms of current values more attractive relative to random bad stuff. And your concern is basically: under distributional shift, why should we think that?
Or perhaps more clearly: if which values are “most attractive” depends on features of the technological landscape, then it’s hard to see why we should just “take the hand we’re dealt” and be happy with the values that are most attractive on some default technological trajectory. Instead, we would end up with preferences over the technological trajectory.
This is not really distinctive to persuasion; it applies just as well to any changes in the environment that would change the process of deliberation/discussion. The hypothesis seems to be that “how good humans are at persuasion” is just a particularly important/significant kind of shift.
But it seems like what really matters is some ratio between how good you are at persuasion and how good you are at other skills that shape the future (or else perhaps you should be much more concerned about other increases in human capability, like education, that make us better at arguing). And in this sense it’s less clear whether AI is better or worse than the status quo. I guess the main thing is that it’s a candidate for a sharp distributional change and so that’s the kind of thing that you would want to be unusually cautious about.
I mostly think the most robust thing is that it’s reasonable to be very interested in the trajectory of values, to think about how much you like the process of deliberation and discourse and selection and so on that shapes those values, and to think of changes as potentially irreversible (since future people would have no interest in reversing them).
The usual response to this argument is that perhaps future values are basically unrelated to present values anyway (since they will also converge to whatever values are most attractive given future technological situations). But this seems relatively unpersuasive, because eventually you might expect many agents who try to deliberately make the future good rather than letting whatever happens happen, and this could eventually drive the rate of drift to 0. This seems fairly likely to happen eventually, but you might think that it will take long enough that existing value changes will still wash out.
Then we end up with a complicated set of moral / decision-theoretic questions about which values we are happy enough with. It’s not really clear to me how you should feel about variation across humans, or across cultures, or for humans in new technological situations, or for a particular kind of deep RL, or what. It seems quite clear that we should care some, and I think given realistic treatments of moral uncertainty you should not care too much more about preventing drift than about preventing extinction given drift (e.g. 10x seems very hard to justify to me). But it generally seems like one of the more pressing questions in moral philosophy, and even if you care equally about those two things (suggesting that you’d value some drifted future population’s values 50% as much as some kind of hypothetical ideal realization) you could still get much more traction by trying to prevent forms of drift that we don’t endorse.
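To make the arithmetic in that parenthetical concrete, here is a minimal worked sketch (an editorial toy calculation with made-up tractability numbers, not something from the comment itself): normalize a hypothetical ideal future to 1 and extinction to 0; then “caring equally” about preventing drift and preventing extinction-given-drift amounts to valuing a drifted-but-surviving future at 0.5, and differences in tractability decide where marginal effort goes.

```python
# Toy expected-value sketch of the "care equally => value drifted futures at 50%" point.
# All numbers are illustrative assumptions, not claims made in the discussion above.

V_IDEAL = 1.0      # hypothetical ideal realization of the future
V_EXTINCT = 0.0    # extinction of Earth-originating intelligence
V_DRIFTED = 0.5    # a drifted-but-surviving future, valued at 50% of ideal

# Value gained by each intervention, holding everything else fixed:
gain_prevent_extinction_given_drift = V_DRIFTED - V_EXTINCT  # 0.5
gain_prevent_drift_given_survival = V_IDEAL - V_DRIFTED      # 0.5

# With V_DRIFTED = 0.5 the two gains are equal, which is the sense in which
# "caring equally" about the two problems corresponds to the 50% valuation.
assert gain_prevent_extinction_given_drift == gain_prevent_drift_given_survival

# Tractability can still dominate: if marginal effort shifts the probability of
# preventing some unendorsed form of drift three times as much as it shifts the
# probability of preventing extinction (assumed numbers), the drift intervention
# wins despite the equal per-outcome value.
delta_p_drift = 0.03       # assumed marginal probability shift from working on drift
delta_p_extinction = 0.01  # assumed marginal probability shift from working on extinction

ev_drift_work = delta_p_drift * gain_prevent_drift_given_survival
ev_extinction_work = delta_p_extinction * gain_prevent_extinction_given_drift
print(ev_drift_work, ev_extinction_work)  # 0.015 vs 0.005
```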
> given realistic treatments of moral uncertainty you should not care too much more about preventing drift than about preventing extinction given drift (e.g. 10x seems very hard to justify to me).
I think you already believe this, but just to clarify: this “extinction” is about the extinction of Earth-originating intelligence, not about humans in particular. So AI alignment is an intervention to prevent drift, not an intervention to prevent extinction. (Though of course, we could care differently about persuasion-tool-induced drift vs unaligned-AI-induced drift.)
Thanks for this! Re: it’s not really about AI, it’s about memetics & ideologies: Yep, totally agree. (The OP puts the emphasis on the memetic ecosystem & thinks of persuasion tools as a change in the fitness landscape. Also, I wrote this story a while back.) What follows is a point-by-point response:
> The most attractive values given a new technological/social situation are likely to be similar to those given the immediately preceding situation, so I’d expect the most attractive values to generally already be endemic, or close enough to endemic values that they don’t look like they’re coming out of left field.
Maybe? I am not sure memetic evolution works this fast, though. Think about how biological evolution doesn’t adapt immediately to changes in the environment: it takes thousands of years at least, arguably millions depending on what counts as “fully adapted” to the new environment. Replication times for memes are orders of magnitude faster, but that just means it should take a few orders of magnitude less time… and during e.g. a slow takeoff scenario there might just not be that much time. (Disclaimer: I’m ignorant of the math behind this sort of thing.) Basically, as tech and economic progress speeds up but memetic evolution stays constant, we should expect there to be some point where the former outstrips the latter and the environment is changing faster than the attractive-memes-for-the-environment can appear and become endemic. Now of course memetic evolution is speeding up too, but the point is that, until further argument, I’m not 100% convinced that we aren’t already out of equilibrium.
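To illustrate the rate comparison being gestured at here, a toy replicator-dynamics sketch: the fitness advantage, replication time, and environment-shift interval below are all made-up parameters, so this only shows the shape of the argument, not its empirical force.

```python
import math

# Toy model: a meme with relative fitness advantage s spreads by logistic /
# replicator dynamics. Time is measured in meme replication cycles.
# All parameter values below are illustrative assumptions.

def cycles_to_endemic(p0: float, p_end: float, s: float) -> float:
    """Replication cycles for a meme to grow from frequency p0 to p_end.

    Uses the logit-space solution of the logistic equation:
    logit(p_t) = logit(p_0) + s * t, so t = (logit(p_end) - logit(p0)) / s.
    """
    logit = lambda p: math.log(p / (1.0 - p))
    return (logit(p_end) - logit(p0)) / s

# A newly-advantaged meme starting at 0.1% of the population, reaching 90%,
# with a 5% fitness advantage per replication cycle:
cycles = cycles_to_endemic(p0=0.001, p_end=0.9, s=0.05)

# If one replication cycle is roughly a week (assumption), adaptation takes:
years_to_adapt = cycles / 52
print(f"{cycles:.0f} cycles ~= {years_to_adapt:.1f} years to become endemic")

# Compare against how often the technological/social environment shifts enough
# to change which memes are most attractive (another assumption):
years_between_environment_shifts = 2
print("already out of equilibrium?", years_to_adapt > years_between_environment_shifts)
```

The point is purely structural: whether the meme population tracks the attractor depends on the ratio of those two timescales, and speeding up the environment without speeding up meme replication pushes the system out of equilibrium.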
> And of course, for any given zero-sum conflict and any given human, one of the participants in that conflict would prefer to push the human towards more attractive values, so those values would be introduced even if not initially endemic.
Not sure this argument works. First of all, very few conflicts are actually zero-sum. Usually there are some world-states that are worse by both players’ lights than some other world-states. Humans ending up in the most attractive memetic state may be like this: an outcome worse by both players’ lights than some alternatives.
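A minimal way to see how both sides can end up somewhere they both dislike is a standard Prisoner’s-Dilemma-style payoff matrix; the payoffs below are hypothetical numbers chosen only to illustrate the structure, not anything claimed in the discussion.

```python
# Toy 2x2 "persuasion arms race" with hypothetical payoffs.
# Each side chooses "restrain" or "push" (deploy maximally-attractive persuasion).
# payoffs[(a_action, b_action)] = (payoff to side A, payoff to side B).
payoffs = {
    ("restrain", "restrain"): (3, 3),  # humans keep roughly their endemic values
    ("restrain", "push"):     (0, 4),  # the pushing side wins the conflict
    ("push",     "restrain"): (4, 0),
    ("push",     "push"):     (1, 1),  # everyone converges on "attractive" values
}
actions = ["restrain", "push"]

def best_response(opponent_action: str, player: int) -> str:
    """Action maximizing this player's payoff against a fixed opponent action."""
    def payoff(my_action: str) -> int:
        profile = (my_action, opponent_action) if player == 0 else (opponent_action, my_action)
        return payoffs[profile][player]
    return max(actions, key=payoff)

# "push" is each side's best response no matter what the other side does...
assert best_response("restrain", 0) == "push" and best_response("push", 0) == "push"
assert best_response("restrain", 1) == "push" and best_response("push", 1) == "push"

# ...so (push, push) is the equilibrium, even though both sides prefer
# (restrain, restrain): payoffs (1, 1) are worse for both than (3, 3).
print(payoffs[("push", "push")], "vs", payoffs[("restrain", "restrain")])
```

So the attractive values do get introduced, as the argument says, but the resulting state can still be one that every participant disprefers.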
> I don’t think you can get paperclips this way, because people trying to get humans to maximize paperclips would be at a big disadvantage in memetic competition compared with the most attractive values (or even compared to more normal human values, which are presumably more attractive than random stuff).
Agreed.
> Then the usual hope is that we are happy with attractive values, e.g. because deliberation and intentional behavior by humans make “smarter” forms of current values more attractive relative to random bad stuff. And your concern is basically: under distributional shift, why should we think that?
Agreed. I would add that even without distributional shift it is unclear why we should expect attractive values to be good. (Maybe the idea is that good = current values because moral antirealism, and current values are the attractive ones for the current environment via the argument above? I guess I’d want that argument spelled out more and the premises argued for.)
> Or perhaps more clearly: if which values are “most attractive” depends on features of the technological landscape, then it’s hard to see why we should just “take the hand we’re dealt” and be happy with the values that are most attractive on some default technological trajectory. Instead, we would end up with preferences over the technological trajectory.
Yes.
> This is not really distinctive to persuasion; it applies just as well to any changes in the environment that would change the process of deliberation/discussion. The hypothesis seems to be that “how good humans are at persuasion” is just a particularly important/significant kind of shift.
Yes? I think it’s particularly important for reasons discussed in the “speculation” section, and because it seems to be in our immediate future and indeed our present. Basically, persuasion tools make ideologies (:= a particular kind of memeplex) stronger and stickier, and they change the landscape so that the ideologies that control the tech platforms have a significant advantage.
> But it seems like what really matters is some ratio between how good you are at persuasion and how good you are at other skills that shape the future (or else perhaps you should be much more concerned about other increases in human capability, like education, that make us better at arguing). And in this sense it’s less clear whether AI is better or worse than the status quo. I guess the main thing is that it’s a candidate for a sharp distributional change and so that’s the kind of thing that you would want to be unusually cautious about.
Has education increased much recently? Not in a way that’s made us significantly more rational as a group, as far as I can tell. Changes in the US education system over the last 20 years presumably made some difference, but they haven’t exactly put us on a bright path towards rational discussion of important issues. My guess is that the effect size is swamped by larger effects from the Internet.
> I mostly think the most robust thing is that it’s reasonable to be very interested in the trajectory of values, to think about how much you like the process of deliberation and discourse and selection and so on that shapes those values, and to think of changes as potentially irreversible (since future people would have no interest in reversing them).
> The usual response to this argument is that perhaps future values are basically unrelated to present values anyway (since they will also converge to whatever values are most attractive given future technological situations). But this seems relatively unpersuasive, because eventually you might expect many agents who try to deliberately make the future good rather than letting whatever happens happen, and this could eventually drive the rate of drift to 0. This seems fairly likely to happen eventually, but you might think that it will take long enough that existing value changes will still wash out.
> Then we end up with a complicated set of moral / decision-theoretic questions about which values we are happy enough with. It’s not really clear to me how you should feel about variation across humans, or across cultures, or for humans in new technological situations, or for a particular kind of deep RL, or what. It seems quite clear that we should care some, and I think given realistic treatments of moral uncertainty you should not care too much more about preventing drift than about preventing extinction given drift (e.g. 10x seems very hard to justify to me). But it generally seems like one of the more pressing questions in moral philosophy, and even if you care equally about those two things (suggesting that you’d value some drifted future population’s values 50% as much as some kind of hypothetical ideal realization) you could still get much more traction by trying to prevent forms of drift that we don’t endorse.
I agree that way of thinking about it seems useful and worthwhile. Are you also implying that thinking specifically about the effects of persuasion tools is not so useful or worthwhile?
I should say, btw, that you’ve been talking about values, but I meant to talk about beliefs as well as values: memes in general. Beliefs can get feedback from reality more easily, and thus hopefully the attractive beliefs are more likely to be good than the attractive values are. But even so, there is room to wonder whether the attractive beliefs for a given environment will all be true… So far, for example, plenty of false beliefs seem to be pretty attractive…
To elaborate on the “AI takeover without AGI or agency” idea a bit more:
If a very persuasive agent AGI were to take over the world by persuading humans to do its bidding (e.g. maximize paperclips), this would count as an AI takeover scenario. The boots on the ground, the “muscle,” would be human. And the brains behind the steering wheels and control panels would be human. And even the brains behind the tech R&D, the financial management, etc. -- even they would be human! The world would look very human and it would look like it was just one group of humans conquering the others. Yet it would still be fair to say it was an AI takeover… because the humans are ultimately controlled by, and doing the bidding of, the AGI.
OK, now what if it isn’t an agent AGI at all? What if it’s just a persuasion tool, and the humans (stupidly) used it on themselves? E.g., as a joke they program the tool to persuade people to maximize paperclips, they test it on themselves, it works surprisingly well, and in a temporary fit of paperclip-maximization the humans decide to constantly use the tool on themselves & upgrade it, thus avoiding “value drift” away from paperclip-maximization… Then we have a scenario that looks very similar to the first one, with a growing group of paperclip-maximizing humans conquering the rest of the world, all under the control of an AI. The difference is that whereas in the first scenario only the muscle, steering, and R&D were done by humans rather than AI, in this scenario the “agenty bits” such as planning and strategic understanding are also done by humans! It still counts as an AI takeover, I say, because an AI is making a group of humans conquer the world and reshape it according to inhuman values.
Of course the second scenario is super unrealistic: humans won’t be so stupid as to use their persuasion tools on themselves, right? Well… they probably won’t try to persuade themselves to maximize paperclips, and if they did it probably wouldn’t work, because persuasion tools won’t be that effective (at least at first). But some (many?) humans probably WILL use their persuasion tools on themselves, to persuade themselves to be truer, more faithful, more radical believers in whatever ideology they already subscribe to. Persuasion tools don’t have to be that powerful to have an effect here; even a single-digit-percentage-point effect size on various metrics would have a big impact on society, I think.
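As a toy illustration of why a single-digit effect size could still matter (the retention and recruitment numbers are assumed, not empirical): if self-applied persuasion tools raise a faction’s year-over-year believer retention by a few percentage points, the effect compounds, and over a couple of decades the faction’s size can diverge substantially from the baseline trajectory.

```python
# Toy compounding model: a small boost to year-over-year believer retention from
# self-applied persuasion tools. All numbers are illustrative assumptions.

def faction_size(initial: float, retention: float, recruitment: float, years: int) -> float:
    """Fraction of the population in the faction after `years`.

    Each year the faction keeps `retention` of its members and recruits new
    members equal to `recruitment` times its current size, capped at 1.0.
    """
    frac = initial
    for _ in range(years):
        frac = min(1.0, frac * (retention + recruitment))
    return frac

baseline = faction_size(initial=0.05, retention=0.88, recruitment=0.10, years=20)
with_tools = faction_size(initial=0.05, retention=0.93, recruitment=0.10, years=20)

# A 5-percentage-point retention boost (0.88 -> 0.93) flips the 20-year trajectory
# from slow decline to steady growth: roughly 3% vs 9% of the population.
print(f"baseline: {baseline:.3f}, with persuasion tools: {with_tools:.3f}")
```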
Persuasion tools will take as input a payload—some worldview, some set of statements, some set of goals/values—and then work to create an expanding faction of people who are dogmatically committed to that payload. (The people who are using said tools with said input on themselves.)
I think it’s an understatement to say that the vast majority of people who use persuasion tools on themselves in this manner will be imbibing payloads that aren’t 100% true and good. Mistakes happen; in the past even the great philosophers were wrong about some things, and surely we are all wrong about some things today, even some things we feel very confident are true/good. I’d bet that it’s not merely the vast majority, but literally everyone!
So this situation seems both realistic to me (unfortunately) and also fairly described as a case of AI takeover (though certainly a non-central case; I don’t care much about the terminology we use here, I just find it amusing).