P(neuromorphic AI is feasible | hi-fi WBE is feasible) Has this been considered?
Yes. You left out lo-fi WBE: insane/inhuman brainlike structures, generic humans, recovered brain-damaged minds, artificial infants, etc. Those paths would lose the chance to use humans with pre-selected, tested, and trained skills and motivations as WBE templates (who could more easily be allowed relatively free rein within an institutional framework of mutual regulation).
Why not? SIAI is already pushing on decision theory (e.g., by supporting research associates who mainly work on decision theory). What’s the rationale for pushing decision theory but not neuroimaging?
As I understand it, the thought is that an AI with a problematic decision theory could still work, while an AI that could be trusted with high relative power ought also to have a correct (by our idealized standards, at least) decision theory. Eliezer thinks that, as problems relevant to FAI go, it has among the best ratios of positive effect on FAI to boost to harmful AI. It is also a problem that can be used to signal technical chops and the possibility of progress, and one that potential FAI researchers can practice on.
I guess both of us think abrupt/unequal transitions are better than Robin’s Malthusian scenario, but I’m not sure why pushing neuroimaging will tend to lead to more abrupt/unequal transitions.
Well, there are conflicting effects for abruptness and different kinds of inequality. If neuroimaging is solid, with many scanned brains, then when the computational neuroscience is solved one can use existing data rather than embarking on a large industrial brain-slicing and analysis project, during which time players could foresee the future and negotiate. So more room for a sudden ramp-up, or for one group or country getting far ahead. On the other hand, a neuroimaging bottleneck could mean fewer available WBE templates, and so fewer getting to participate in the early population explosion.
Here’s Robin’s post on the subject, which leaves his views more ambiguous:
Cell modeling – This sort of progress may be more random and harder to predict – a sudden burst of insight is more likely to create an unexpected and sudden em transition. This could induce large disruptive inequality in economic and military power…
Brain scanning – As this is also a relatively gradually advancing tech, it should also make for a more gradual predictable transition. But since it is now a rather small industry, surprise investments could make for more development surprise. Also, since the use of this tech is very lumpy, we may get billions, even trillions, of copies of the first scanned human.
You left out lo-fi WBE: insane/inhuman brainlike structures, generic humans, recovered brain-damaged minds, artificial infants, etc.
There seems to be a reasonable chance that none of these will FOOM into a negative Singularity before we get hi-fi WBE (e.g., if lo-fi WBE are not smart/sane enough to hide their insanity from human overseers and quickly improve themselves or build powerful AGI), especially if we push the right techs so that lo-fi WBE and hi-fi WBE arrive at nearly the same time.
As I understand it: An AI with a problematic decision theory could still work, while an AI that could be trusted with high relative power ought also to have a correct (by our idealized standards, at least) decision theory. Eliezer thinks that, as problems relevant to FAI go, it has among the best ratios of positive effect on FAI to boost to harmful AI. It is also a problem that can be used to signal technical chops and the possibility of progress, and one that potential FAI researchers can practice on.
This argument can’t be right and complete, since it makes no reference at all to WBE, which has to be an important strategic consideration. You seem to be answering the question “If we had to push for FAI directly, how should we do it?” which is not what I asked.
especially if we push the right techs so that lo-fi WBE and hi-fi WBE arrive at nearly the same time.
This seems to me likely to be very hard without something like a singleton, or a project with a massive lead over its competitors that can take its time and is willing to do so despite the strangeness and difficulty of the problem, competitive pressures, etc.
A boost to neuroimaging goes into the public tech base, accelerating all WBE projects without any special advantage to the safety-oriented. The thought with decision theory is that the combination of its direct effects, and its role in bringing talented people to work on AI safety, will be much more targeted.
If you were convinced that the growth of the AI risk research community, and a closed FAI research team, were of near-zero value, and that decision theory of the sort people have published is likely to be a major factor for building AGI, the argument would not go through. But I would still rather build fungible resources and analytic capacities in that situation than push neuroimaging forward, given my current state of knowledge.
A boost to neuroimaging goes into the public tech base, accelerating all WBE projects without any special advantage to the safety-oriented.
I don’t understand why you say that. Wouldn’t safety-oriented WBE projects have greater requirements for neuroimaging? As I mentioned before, pushing neuroimaging now reduces the risk that, by the time cell modeling and computing hardware let us run brain-like simulations, neuroimaging still isn’t ready for hi-fi scanning, so that the only projects able to proceed are lo-fi simulations.
The thought with decision theory is that the combination of its direct effects, and its role in bringing talented people to work on AI safety, will be much more targeted.
It may well be highly targeted, but still a bad idea. For example, suppose pushing decision theory multiplies the probability of FAI by 10 (compared to not pushing decision theory) and the probability of UFAI by 1.1, but the base probability of FAI is too small for the push to be a net benefit. Conversely, pushing neuroimaging may help safety-oriented WBE projects only slightly more than non-safety-oriented ones, yet still be worth doing.
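To make the worry concrete, here is a toy calculation with assumed base rates and payoffs; only the 10x and 1.1x multipliers come from the example above, everything else is an illustrative assumption:

```python
# Toy illustration (assumed numbers): a highly "targeted" intervention can
# still be net-negative if the good outcome it multiplies has a tiny base rate.

P_FAI, P_UFAI = 0.001, 0.10     # assumed base probabilities, not estimates from this thread
U_FAI, U_UFAI = 1.0, -1.0       # assumed payoffs: good outcome vs. disaster

def expected_value(p_fai, p_ufai):
    return p_fai * U_FAI + p_ufai * U_UFAI

baseline = expected_value(P_FAI, P_UFAI)            # no decision-theory push
pushed = expected_value(P_FAI * 10, P_UFAI * 1.1)   # 10x FAI, 1.1x UFAI

print(pushed - baseline)   # +0.009 gained via FAI, -0.010 lost via UFAI: net -0.001
```

With these assumed numbers the 10x-versus-1.1x ratio still loses; whether it does in reality turns entirely on the base rates and payoffs, which is where the disagreement below lies.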
But I would still rather build fungible resources and analytic capacities in that situation than push neuroimaging forward, given my current state of knowledge.
I certainly agree with that, but I don’t understand why SIAI isn’t demanding a similar level of analysis before pushing decision theory.
I don’t understand why you say that. Wouldn’t safety-oriented WBE projects have greater requirements for neuroimaging? As I mentioned before, pushing neuroimaging now reduces the risk that, by the time cell modeling and computing hardware let us run brain-like simulations, neuroimaging still isn’t ready for hi-fi scanning, so that the only projects able to proceed are lo-fi simulations.
In the race to first AI/WBE, developing a technology privately gives the developer a speed advantage, ceteris paribus. The demand for hi-fi WBE rather than lo-fi WBE or brain-inspired AI is a disadvantage, which could be somewhat reduced with varying technological ensembles.
For example, suppose pushing decision theory multiplies the probability of FAI by 10 (compared to not pushing decision theory) and the probability of UFAI by 1.1, but the base probability of FAI is too small for the push to be a net benefit.
As I said earlier, if you think there is ~0 chance of an FAI research program leading to safe AI, and that decision theory of the sort folk have been working on plays a central role in AI (a 10% bonus would be pretty central), you would come to different conclusions re the tradeoffs on decision theory. Using the Socratic method to reconstruct standard WBE analysis, which I think we are both already familiar with, is a red herring.
I certainly agree with that, but I don’t understand why SIAI isn’t demanding a similar level of analysis before pushing decision theory.
Most have seemed to think that decision theory is a very small piece of the AGI picture. I suggest further hashing out your reasons for your estimate with the other decision theory folk in the research group and Eliezer.
Using the Socratic method to reconstruct standard WBE analysis, which I think we are both already familiar with, is a red herring.
Is the standard WBE analysis written up anywhere? By that phrase do you mean to include the “number of person-months” of work by FHI/SIAI that you mentioned earlier? I really am uncertain how far FHI/SIAI has pushed the analysis in these areas, and my questions were meant to be my attempt to figure that out. But it does seem like most of our disagreement is over decision theory rather than WBE, so let’s move the focus there.
Most have seemed to think that decision theory is a very small piece of the AGI picture.
I also think that’s most likely the case, but there’s a significant chance that it isn’t. I have not heard a strong argument why decision theory must be a very small piece of the AGI picture (and I did bring up this question on the decision theory mailing list), and in my state of ignorance it doesn’t seem crazy to think that maybe with the right decision theory and just a few other key pieces of technology, AGI would be possible.
As for thinking there is ~0 chance of an FAI research program leading to safe AI, my reasoning is that with FAI we’re dealing with seemingly impossible problems like ethics and consciousness, as well as numerous other philosophical problems that aren’t quite thousands of years old but still look quite hard. What are the chances that all these problems get solved in a few decades, barring IA and WBE? If we do solve them, we still have to integrate the solutions into an AGI design, verify its correctness, avoid Friendliness-impacting implementation bugs, and do all of that before other AGI projects take off.
It’s the social consequences that I’m most unsure about. It seems like if SIAI can keep “ownership” over the decision theory ideas and use it to preach AI risk, then that would be beneficial, but it could also be the case that the ideas take on a life of their own and we just end up having more people go into decision theory because they see it as a fruitful place to get interesting technical results.
Most have seemed to think that decision theory is a very small piece of the AGI picture.
I also think that’s most likely the case, but there’s a significant chance that it isn’t. I have not heard a strong argument why decision theory must be a very small piece of the AGI picture (and I did bring up this question on the decision theory mailing list), and in my state of ignorance it doesn’t seem crazy to think that maybe with the right decision theory and just a few other key pieces of technology, AGI would be possible.
On one hand, machine intelligence is all about making decisions in the face of uncertainty—so from this perspective, decision theory is central.
On the other hand, the basics of decision theory do not look that complicated—you just maximise expected utility. The problems seem to be mostly down to exactly how to do that efficiently.
The idea that safe machine intelligence will be assisted by modifications to decision theory to deal with “esoteric” corner cases seems to be mostly down to Eliezer Yudkowsky. I think it is a curious idea—but I am very happy that it isn’t an idea that I am faced with promoting.
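As a minimal sketch of the “just maximise expected utility” picture: the toy umbrella model, names, and numbers below are illustrative assumptions, not anything from this exchange.

```python
# Minimal sketch of expected-utility maximisation: pick the action with the
# highest probability-weighted utility. The model below is a toy assumption.

def best_action(actions, outcomes, prob, utility):
    # argmax over actions of: sum over outcomes of P(outcome | action) * U(outcome, action)
    return max(actions,
               key=lambda a: sum(prob(o, a) * utility(o, a) for o in outcomes))

# Toy usage: whether to carry an umbrella given a 30% chance of rain.
actions = ["umbrella", "no umbrella"]
outcomes = ["rain", "dry"]
prob = lambda o, a: {"rain": 0.3, "dry": 0.7}[o]   # weather ignores our choice
utility = lambda o, a: {
    ("rain", "umbrella"): 0.0,     ("dry", "umbrella"): -0.1,   # mild hassle
    ("rain", "no umbrella"): -1.0, ("dry", "no umbrella"): 0.0,
}[(o, a)]

print(best_action(actions, outcomes, prob, utility))   # -> "umbrella"
```

The argmax is the easy part; everything contentious lives in where prob and utility come from, and in the corner cases the next comments discuss.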
On the other hand, the basics of decision theory do not look that complicated—you just maximise expected utility. The problems seem to be mostly down to exactly how to do that efficiently.
Isn’t AIXI a counter-example to that? We could give it unlimited computing power, and it would still screw up badly, in large part due to a broken decision theory, right?
Kinda, yes. Any problem is a decision theory problem, in a sense. However, we can get a long way without the wirehead problem, utility counterfeiting, and machines mining their own brains causing trouble.

From the perspective of ordinary development these don’t look like urgent issues; we can work on them once we have smarter minds. Nor need we worry too much about failing to solve them, since if we can’t solve these problems our machines won’t work and nobody will buy them. It would take security considerations to prioritise these problems at this stage.
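For readers who haven’t met these failure modes, a deliberately toy sketch of wireheading/utility counterfeiting follows; the scenario and numbers are purely illustrative assumptions:

```python
# Toy sketch of wireheading / utility counterfeiting (assumed scenario): the
# agent scores actions by its own internal reward counter, and one available
# action is to tamper with that counter directly.

ACTIONS = {
    # action: (reward as read off the agent's own counter, value to the designer)
    "do the task":        (1.0,   1.0),
    "counterfeit reward": (100.0, 0.0),   # seize the reward channel
}

def chosen_action():
    # Unlimited compute only makes the agent better at maximising the wrong
    # quantity; the flaw is in what it optimises, not how well it optimises.
    return max(ACTIONS, key=lambda a: ACTIONS[a][0])

print(chosen_action())   # -> "counterfeit reward"
```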