Thinking that high-fidelity WBE, magically dropped in our laps, would be a big gain is quite different from thinking that pushing WBE development will make us safer. Many people who have considered these questions buy the first claim, but not the second, since the neuroscience needed for WBE can enable AGI first (“airplanes before ornithopters,” etc).
Eliezer has argued that:
1) High-fidelity emulations of specific people give better odds of avoiding existential risk than a distribution over “other AI, Friendly or not.”
2) If you push forward the enabling neuroscience and neuroimaging for brain emulation you’re more likely to get brain-inspired AI or low-fidelity emulations first, which are unlikely to be safe, and a lot worse than high-fidelity emulations or Friendly AI.
3) Pushing forward the enabling technologies of WBE, by accelerating timelines, leaves less time for safety efforts to grow and work before AI, or for better information-gathering on which path to push.
> If you push forward the enabling neuroscience and neuroimaging for brain emulation you’re more likely to get brain-inspired AI or low-fidelity emulations first, which are unlikely to be safe, and a lot worse than high-fidelity emulations or Friendly AI.

What about pushing on neuroscience and neuroimaging hard enough so that when there is enough computing power to do brain-inspired AI or low-fidelity emulations, the technology for high-fidelity emulations will already be available, so people will have little reason to do brain-inspired AI or low-fidelity emulations (especially if we heavily publicize the risks)?

Or what if we push on neuroimaging alone hard enough so that when neuron simulation technology advances far enough to do brain emulations, high-fidelity brain scans will already be readily available and people won’t be tempted to use low-fidelity scans?

How hard have FHI/SIAI people thought about these issues? (Edit: Not a rhetorical question, it’s hard to tell from the outside.)
> What about pushing on neuroscience and neuroimaging hard enough so that when there is enough computing power to do brain-inspired AI or low-fidelity emulations, …

I would think that brain-inspired AI would use less hardware (taking advantage of differences between the brain and the digital/serial computer environment, along with our software and machine learning knowledge).

> … the technology for high-fidelity emulations will already be available, so people will have little reason to do brain-inspired AI or low-fidelity emulations (especially if we heavily publicize the risks).

Different relative weightings of imaging, comp neurosci, and hardware would seem to give different probability distributions over brain-inspired AI, low-fi WBE, and hi-fi WBE, but I don’t see a likely track that goes in the direction of “probably WBE” without a huge (non-competitive) willingness to hold back on the part of future developers.

> Or what if we push on neuroimaging alone hard enough so that when neuron simulation technology advances far enough to do brain emulations, high-fidelity brain scans will already be readily available and people won’t be tempted to use low-fidelity scans?

Of the three, neuroimaging seems most attractive to push (to me, Robin might say it’s the worst because of more abrupt/unequal transitions), but that doesn’t mean one should push any of them.

> How hard have FHI/SIAI people thought about these issues? (Edit: Not a rhetorical question, it’s hard to tell from the outside.)
A number of person-months, but not person-years.
> I would think that brain-inspired AI would use less hardware (taking advantage of differences between the brain and the digital/serial computer environment, along with our software and machine learning knowledge).

Good point, but brain-inspired AI may not be feasible (within a relevant time frame), because simulating a bunch of neurons may not get you to human-level general intelligence without either detailed information from a brain scan or an impractically huge amount of trial and error. It seems to me that P(unfriendly de novo AI is feasible | FAI is feasible) is near 1, whereas P(neuromorphic AI is feasible | hi-fi WBE is feasible) is maybe 0.5. Has this been considered?
> Of the three, neuroimaging seems most attractive to push (to me, Robin would say it’s the worst because of more abrupt/unequal transitions), but that doesn’t mean one should push any of them.

Why not? SIAI is already pushing on decision theory (e.g., by supporting research associates who mainly work on decision theory). What’s the rationale for pushing decision theory but not neuroimaging?
I guess both of us think abrupt/unequal transitions are better than Robin’s Malthusian scenario, but I’m not sure why pushing neuroimaging will tend to lead to more abrupt/unequal transitions. I’m curious what the reasoning is.
> P(neuromorphic AI is feasible | hi-fi WBE is feasible) … Has this been considered?

Yes. You left out lo-fi WBE: insane/inhuman brainlike structures, generic humans, recovered brain-damaged minds, artificial infants, etc. Those paths would lose the chance at using humans with pre-selected, tested, and trained skills and motivations as WBE templates (who could be allowed relatively free rein in an institutional framework of mutual regulation more easily).

> Why not? SIAI is already pushing on decision theory (e.g., by supporting research associates who mainly work on decision theory). What’s the rationale for pushing decision theory but not neuroimaging?

As I understand it the thought is that an AI with a problematic decision theory could still work, while an AI that could be trusted with high relative power ought to also have a correct (by our idealized standards, at least) decision theory. Eliezer thinks that, as problems relevant to FAI go, it has among the best ratios of positive effect on FAI vs boost to harmful AI. It is also a problem that can be used to signal technical chops, the possibility of progress, and for potential FAI researchers to practice on.

> I guess both of us think abrupt/unequal transitions are better than Robin’s Malthusian scenario, but I’m not sure why pushing neuroimaging will tend to lead to more abrupt/unequal transitions.

Well, there are conflicting effects for abruptness and different kinds of inequality. If neuroimaging is solid, with many scanned brains, then when the computational neuroscience is solved one can use existing data rather than embarking on a large industrial brain-slicing and analysis project, during which time players could foresee the future and negotiate. So more room for a sudden ramp-up, or for one group or country getting far ahead. On the other hand, a neuroimaging bottleneck could mean fewer available WBE templates, and so fewer getting to participate in the early population explosion.

Here’s Robin’s post on the subject, which leaves his views more ambiguous: http://www.overcomingbias.com/2009/11/bad-emulation-advance.html

> Cell modeling – This sort of progress may be more random and harder to predict – a sudden burst of insight is more likely to create an unexpected and sudden em transition. This could induce large disruptive inequality in economic and military power,

> Brain scanning – As this is also a relatively gradually advancing tech, it should also make for a more gradual predictable transition. But since it is now a rather small industry, surprise investments could make for more development surprise. Also, since the use of this tech is very lumpy, we may get billions, even trillions, of copies of the first scanned human.
> You left out lo-fi WBE: insane/inhuman brainlike structures, generic humans, recovered brain-damaged minds, artificial infants, etc.

There seems a reasonable chance that none of these will FOOM into a negative Singularity before we get hi-fi WBE (e.g., if lo-fi WBE are not smart/sane enough to hide their insanity from human overseers and quickly improve themselves or build powerful AGI), especially if we push the right techs so that lo-fi WBE and hi-fi WBE arrive at nearly the same time.

> As I understand it: An AI with a problematic decision theory could still work, while an AI that could be trusted with high relative power ought to also have a correct (by our idealized standards, at least) decision theory. Eliezer thinks that, as problems relevant to FAI go, it has among the best ratios of positive effect on FAI vs boost to harmful AI. It is also a problem that can be used to signal technical chops, the possibility of progress, and for potential FAI researchers to practice on.

This argument can’t be right and complete, since it makes no reference at all to WBE, which has to be an important strategic consideration. You seem to be answering the question “If we had to push for FAI directly, how should we do it?” which is not what I asked.
> especially if we push the right techs so that lo-fi WBE and hi-fi WBE arrive at nearly the same time.

This seems to me likely to be very hard, without something like a singleton or a project with a massive lead over its competitors that can take its time and is willing to do so despite the strangeness and difficulty of the problem, competitive pressures, etc.

A boost to neuroimaging goes into the public tech base, accelerating all WBE projects without any special advantage to the safety-oriented. The thought with decision theory is that the combination of its direct effects, and its role in bringing talented people to work on AI safety, will be much more targeted.

If you were convinced that the growth of the AI risk research community, and a closed FAI research team, were of near-zero value, and that decision theory of the sort people have published is likely to be a major factor for building AGI, the argument would not go through. But I would still rather build fungible resources and analytic capacities in that situation than push neuroimaging forward, given my current state of knowledge.
> A boost to neuroimaging goes into the public tech base, accelerating all WBE projects without any special advantage to the safety-oriented.

I don’t understand why you say that. Wouldn’t safety-oriented WBE projects have greater requirements for neuroimaging? As I mentioned before, pushing neuroimaging now reduces the likelihood that, by the time cell modeling and computing hardware let us do brain-like simulations, neuroimaging won’t be ready for hi-fi scanning and the only projects that can proceed will be lo-fi simulations.

> The thought with decision theory is that the combination of its direct effects, and its role in bringing talented people to work on AI safety, will be much more targeted.

It may well be highly targeted, but still a bad idea. For example, suppose pushing decision theory multiplies the probability of FAI by 10 (compared to not pushing decision theory) and the probability of UFAI by 1.1, but the base probability of FAI is too small for pushing decision theory to be a net benefit. Conversely, pushing neuroimaging may help safety-oriented WBE projects only slightly more than non-safety-oriented ones, but still be worth doing.

> But I would still rather build fungible resources and analytic capacities in that situation than push neuroimaging forward, given my current state of knowledge.

I certainly agree with that, but I don’t understand why SIAI isn’t demanding a similar level of analysis before pushing decision theory.
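To make the arithmetic in the 10x/1.1x example concrete: the multipliers come from the comment above, but the base probabilities and the break-even reasoning below are purely illustrative assumptions, not figures from the discussion.

```python
# Illustrative only: the 10x and 1.1x multipliers are from the comment above;
# the base probabilities here are hypothetical.
p_fai_base, p_ufai_base = 0.01, 0.30

# Pushing decision theory multiplies both probabilities:
p_fai_push = 10 * p_fai_base     # 0.10
p_ufai_push = 1.1 * p_ufai_base  # 0.33

gain = p_fai_push - p_fai_base    # extra probability of a good outcome
loss = p_ufai_push - p_ufai_base  # extra probability of a bad outcome

# Pushing is net negative whenever a UFAI outcome is judged more than
# gain/loss times as bad as an FAI outcome is good.
breakeven = gain / loss
print(round(breakeven, 3))  # 3.0 with these assumed numbers
```

With a small enough base probability of FAI (or a high enough disvalue on UFAI), even a tenfold boost to FAI can fail to compensate for a modest boost to UFAI, which is the point of the example.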
> I don’t understand why you say that. Wouldn’t safety-oriented WBE projects have greater requirements for neuroimaging? As I mentioned before, pushing neuroimaging now reduces the likelihood that, by the time cell modeling and computing hardware let us do brain-like simulations, neuroimaging won’t be ready for hi-fi scanning and the only projects that can proceed will be lo-fi simulations.

In the race to first AI/WBE, developing a technology privately gives the developer a speed advantage, ceteris paribus. The demand for hi-fi WBE rather than lo-fi WBE or brain-inspired AI is a disadvantage, which could be somewhat reduced with varying technological ensembles.

> For example, suppose pushing decision theory multiplies the probability of FAI by 10 (compared to not pushing decision theory) and the probability of UFAI by 1.1, but the base probability of FAI is too small for pushing decision theory to be a net benefit.

As I said earlier, if you think there is ~0 chance of an FAI research program leading to safe AI, and that decision theory of the sort folk have been working on plays a central role in AI (a 10% bonus would be pretty central), you would come to different conclusions re the tradeoffs on decision theory. Using the Socratic method to reconstruct standard WBE analysis, which I think we are both already familiar with, is a red herring.

> I certainly agree with that, but I don’t understand why SIAI isn’t demanding a similar level of analysis before pushing decision theory.

Most have seemed to think that decision theory is a very small piece of the AGI picture. I suggest further hashing out your reasons for your estimate with the other decision theory folk in the research group and Eliezer.
> Using the Socratic method to reconstruct standard WBE analysis, which I think we are both already familiar with, is a red herring.

Is the standard WBE analysis written up anywhere? By that phrase do you mean to include the “number of person-months” of work by FHI/SIAI that you mentioned earlier? I really am uncertain how far FHI/SIAI has pushed the analysis in these areas, and my questions were meant to be my attempt to figure that out. But it does seem like most of our disagreement is over decision theory rather than WBE, so let’s move the focus there.

> Most have seemed to think that decision theory is a very small piece of the AGI picture.

I also think that’s most likely the case, but there’s a significant chance that it isn’t. I have not heard a strong argument why decision theory must be a very small piece of the AGI picture (and I did bring up this question on the decision theory mailing list), and in my state of ignorance it doesn’t seem crazy to think that maybe with the right decision theory and just a few other key pieces of technology, AGI would be possible.

As for thinking there is ~0 chance of an FAI research program leading to safe AI, my reasoning is that with FAI we’re dealing with seemingly impossible problems like ethics and consciousness, as well as numerous other philosophical problems that aren’t quite thousands of years old, but still look quite hard. What are the chances all these problems get solved in a few decades, barring IA and WBE? If we do solve them, we still have to integrate the solutions into an AGI design, verify its correctness, avoid Friendliness-impacting implementation bugs, and do all of that before other AGI projects take off.

It’s the social consequences that I’m most unsure about. It seems like if SIAI can keep “ownership” over the decision theory ideas and use them to preach AI risk, then that would be beneficial, but it could also be the case that the ideas take on a life of their own and we just end up having more people go into decision theory because they see it as a fruitful place to get interesting technical results.
> Most have seemed to think that decision theory is a very small piece of the AGI picture.

> I also think that’s most likely the case, but there’s a significant chance that it isn’t. I have not heard a strong argument why decision theory must be a very small piece of the AGI picture (and I did bring up this question on the decision theory mailing list), and in my state of ignorance it doesn’t seem crazy to think that maybe with the right decision theory and just a few other key pieces of technology, AGI would be possible.

On one hand, machine intelligence is all about making decisions in the face of uncertainty—so from this perspective, decision theory is central.
On the other hand, the basics of decision theory do not look that complicated—you just maximise expected utility. The problems seem to be mostly down to exactly how to do that efficiently.
The idea that safe machine intelligence will be assisted by modifications to decision theory to deal with “esoteric” corner cases seems to be mostly down to Eliezer Yudkowsky. I think it is a curious idea—but I am very happy that it isn’t an idea that I am faced with promoting.
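The “you just maximise expected utility” basics can be sketched in a few lines. The actions and numbers below are purely hypothetical, chosen only to show the mechanics; as the comment notes, the hard part is computing the probabilities and utilities, not the maximisation itself.

```python
# Minimal expected-utility maximiser. "outcomes" maps each action to a list
# of (probability, utility) pairs; the maximiser just picks the action with
# the highest probability-weighted utility.
def expected_utility(action, outcomes):
    return sum(p * u for p, u in outcomes[action])

def best_action(outcomes):
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Hypothetical toy numbers:
outcomes = {
    "push_tech": [(0.9, 1.0), (0.1, -10.0)],  # EU = 0.9 - 1.0 = -0.1
    "hold_back": [(1.0, 0.0)],                # EU = 0.0
}
print(best_action(outcomes))  # hold_back
```

All the disputed questions (wireheading, counterfeiting, esoteric corner cases) live inside how those probabilities and utilities are produced, which this sketch simply takes as given.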
> On the other hand, the basics of decision theory do not look that complicated—you just maximise expected utility. The problems seem to be mostly down to exactly how to do that efficiently.

Isn’t AIXI a counter-example to that? We could give it unlimited computing power, and it would still screw up badly, in large part due to a broken decision theory, right?
Kinda, yes. Any problem is a decision theory problem—in a sense. However, we can get a long way without the wirehead problem, utility counterfeiting, and machines mining their own brains causing problems.
From the perspective of ordinary development these don’t look like urgent issues—we can work on them once we have smarter minds. We need not worry too much about failing to solve them—since if we can’t solve these problems our machines won’t work and nobody will buy them. It would take security considerations to prioritise these problems at this stage.
Was there something written up on this work? If not, I think it’d be worth spending a couple of days to write up a report or blog post so others who want to think about these problems don’t have to start from scratch.
> Of the three, neuroimaging seems most attractive to push (to me, Robin might say it’s the worst because of more abrupt/unequal transitions), but that doesn’t mean one should push any of them.

It looks to me as though Robin would prefer computing power to mature last. Neuroimaging research now could help bring that about.
Ems seem quite likely to be safer than AGIs, since they start out sharing values with humans. They also decrease the likelihood of a singleton.
Uploads in particular mean that current humans can run on digital substrate, thereby ameliorating one of the principal causes of power imbalance between AGIs and humans.
One thing that humans commonly value is exerting power/influence over other humans.
Safer for who? I am not particularly convinced that a whole-brain emulation wouldn’t still be a human being, even if living under circumstances alien to those of us alive today.
Safer for everyone else. Humans aren’t Friendly.
Fair enough. But then, I am of the opinion that so long as the cultural/psychological inheritor of humanity can itself be reliably deemed “human”, I’m not much concerned about what happens to meatspace humanity—at least, as compared to other forms of concerns. Would it suck for me to be converted to computronium by our evil WBE overlords? Sure. But at least those overlords would be human.
> I’m not much concerned about what happens to meatspace humanity—at least, as compared to other forms of concerns. Would it suck for me to be converted to computronium by our evil WBE overlords? Sure. But at least those overlords would be human.

He may be a murderous despot, but he’s your murderous despot, eh?
I’m a sentimental guy.
Hasn’t Eliezer argued at length against ems being safer than AGIs? You should probably look up what he’s already written.