A human is very massively sub-mankind-level intelligent, and a rough approximation of one running at sub-realtime speeds, with a daily sustenance cost several orders of magnitude higher, is even more so.
No disagreement there.
Granted, it’s a great plot device once you give it superpowers, and so there have been many high-profile movies concerning such scenarios, and you see worlds destroyed by AI on the big screen. And your internal probability evaluator—evolved before CGI—uses the frequency of scenarios you’ve seen with your own eyes.
Then why are you discussing sudden super-intelligence? The faster cell simulation technologies advance, the weaker the hardware they’ll run on will be.
Reversed stupidity isn’t intelligence.
Direct stupidity is not intelligence either. The a priori likelihood of an arbitrary made-up prediction being anywhere near correct is pretty damn low. If I tell you that 7 932 384 626 is some lottery number from yesterday, you may believe it, but once you see that it’s the digits of pi fairly close to the start, your credence should drop. A lot.
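(A minimal check of the digits claim, as a Python sketch; the first few dozen digits of pi are hardcoded as a known constant, and the "lottery number" is the one made up above:)

```python
# Digits of pi after "3.", hardcoded; enough to check the claim above.
PI_DIGITS = "14159265358979323846264338327950288419"

candidate = "7932384626"  # the made-up "lottery number" 7 932 384 626
position = PI_DIGITS.find(candidate)
print(position)  # 12 -> the sequence starts at the 13th digit after the decimal point
```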
The faster cell simulation technologies advance, the weaker the hardware they’ll run on will be.
If hardware growth strictly followed Moore’s Law and CPUs (or GPUs, etc.) were completely general-purpose, this would be true. But if cell simulation became a dominant application for computing hardware, one could imagine instruction set extensions or even entire architecture changes designed around it. Obviously, it would also take some time for software to take advantage of hardware changes.
Well, first it has to become dominant enough (for which it’d need to be common enough, for which it’d need to be useful enough—used for what?), then the hardware specialization is not easy either, and on top of that, specialized hardware locks the designs in (prevents easy modification and optimization). Especially if we’re speaking of specializing beyond how GPUs are specialized for parallel floating-point computations.
I’m afraid you’re going to have to explain yourself better if you want me to respond… I confess I don’t see clearly how what you’re saying pertains to our argument.
The point is, cell simulation won’t yield this stupid AI movie plot threat that you guys are concerned about. This is because it doesn’t result in sudden superintelligence, but in a very gradual transition.
And insofar as there’s some residual possibility that it could, this possibility is lessened by working on it earlier.
I’m puzzled why you focus on the AI movie plot threat when discussing any AI-related technology, but my suspicion is that it’s because it is a movie plot threat.
edit: as for the “robust provably safe AI”: as one component of “safe”, an AI must be able to look at—say—an electronic circuit diagram (or an even lower-level representation) and tell whether said circuit diagram implements a tortured sentient being. You’d need neurobiology merely to define what’s bad. The problem is that “robust provably safe” is nebulous enough that you can’t link it to anything concrete.
The point is, cell simulation won’t yield this stupid AI movie plot threat that you guys are concerned about. This is because it doesn’t result in sudden superintelligence, but in a very gradual transition.
You seem awfully confident. I agree that you’re likely right, but I think it’s hard to know for sure, and most people who speak on this issue are too confident (including you, and both EY and RH in their AI foom debate).
And insofar as there’s some residual possibility that it could, this possibility is lessened by working on it earlier.
Just to clarify: so you mostly agree with the “Bad Emulation Advance” blog post?
It’s not clear to me that a gradual transition completely defeats the argument against neuromorphic AI. If neuromorphic AI is less predictable (to put things poetically, “harder to wield”) than AI constructed so that it provably satisfies certain properties, then you can imagine humanity wielding a bigger and bigger weapon that’s hard to control. How long do you think the world would last if everyone had a pistol that fired tactical nuclear weapons? What if the pistol had a one-in-six chance of firing in a random direction?
I’m puzzled why you focus on the AI movie plot threat when discussing any AI-related technology...
Want to point to a specific case where I did that?
edit: as for the “robust provably safe AI”: as one component of “safe”, an AI must be able to look at—say—an electronic circuit diagram (or an even lower-level representation) and tell whether said circuit diagram implements a tortured sentient being. You’d need neurobiology merely to define what’s bad. The problem is that “robust provably safe” is nebulous enough that you can’t link it to anything concrete.
That’s an interesting point. I think it probably makes sense to think of “robust provably safe” as being a continuous parameter. You’ve got your module that determines what’s ethical and what isn’t, you’ve got your module that makes predictions, and you’ve got your module that generates plans. The probability of your AI being “safe” is the product of the probabilities of all your modules being “safe”. If a neuromorphic AI self-modifies in a less predictable way, that seems like a loss, keeping the ethics module constant.
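(A toy sketch of that product rule, assuming, purely hypothetically, that the modules fail independently; the numbers are made up:)

```python
# Treat overall safety as the product of per-module safety probabilities,
# assuming (hypothetically) that module failures are independent.
module_safety = {
    "ethics": 0.99,      # correctly judges what is ethical
    "prediction": 0.95,  # models the world accurately
    "planning": 0.97,    # generates plans consistent with the other two
}

p_safe = 1.0
for p in module_safety.values():
    p_safe *= p

print(round(p_safe, 3))  # 0.912 -- any single weak module drags the whole product down
```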
You seem awfully confident. I agree that you’re likely right, but I think it’s hard to know for sure, and most people who speak on this issue are too confident (including you, and both EY and RH in their AI foom debate).
There’s a false equivalence, similar to what’d happen if I were predicting “the lottery will not roll 12345134” and someone else were predicting “the lottery will roll 12345134”. Predicting some sudden change in a growth curve along with the cause of such a change is a guess into a large space of possibilities; if such a guess is as unsupported as its negation, it’s extremely unlikely and its negation is much more likely.
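(A minimal illustration of that asymmetry, assuming a hypothetical lottery that draws one 8-digit number uniformly at random:)

```python
# Hypothetical lottery: one 8-digit number drawn uniformly at random.
n_outcomes = 10**8
p_roll = 1 / n_outcomes      # "the lottery will roll 12345134"
p_not_roll = 1 - p_roll      # "the lottery will not roll 12345134"
print(p_roll, p_not_roll)    # about 1e-08 versus about 0.99999999: nowhere near equivalent claims
```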
If neuromorphic AI is less predictable
That strikes me as a rather silly way to look at it. The future generations of biological humans are not predictable or controllable either.
If a neuromorphic AI self-modifies in a less predictable way, that seems like a loss, keeping the ethics module constant.
The point is that you need a bottom-up understanding of, for example, suffering to be able to even begin working on an “ethics module” which recognizes suffering as bad. (We get away without a conscious understanding of it only because we can feel it ourselves and thus implicitly embody a definition of it.) On the road to that, you obviously have cell simulation and other neurobiology.
The broader picture is that, with zero clue as to the technical process of actually building the “ethics module”, when you look at, say, openworm and it doesn’t seem like it helps build an ethics module, that’s not representative in any way of whether it would or would not help; it’s only representative of openworm being a concrete and specific advance and the “ethics module” being too far off and nebulous.
There’s a false equivalence, similar to what’d happen if I were predicting “the lottery will not roll 12345134” and someone else were predicting “the lottery will roll 12345134”. Predicting some sudden change in a growth curve along with the cause of such a change is a guess into a large space of possibilities; if such a guess is as unsupported as its negation, it’s extremely unlikely and its negation is much more likely.
This sounds to me like an argument over priors; I’ll tap out at this point.
That strikes me as a rather silly way to look at it. The future generations of biological humans are not predictable or controllable either.
Well, do you trust humans with humanity’s future? I’m not sure I do.
The point is that you need a bottom-up understanding of, for example, suffering to be able to even begin working on an “ethics module” which recognizes suffering as bad. (We get away without a conscious understanding of it only because we can feel it ourselves and thus implicitly embody a definition of it.) On the road to that, you obviously have cell simulation and other neurobiology.
Well yeah and I could trivially “defeat” any argument of yours by declaring my prior for it to be very low. My priors for the future are broadly distributed because the world we are in would seem very weird to a hunter-gatherer, so I think it’s likely that the world of 6,000 years from now will seem very weird to us. Heck, World War II would probably sound pretty fantastic if you described it to Columbus.
Priors can’t go arbitrarily high before the sum over incompatible propositions becomes greater than 1.
If we were to copy your brain a trillion times over and ask it to give your “broadly distributed” priors for various mutually incompatible and very specific propositions, the results should sum to 1 (or to less than 1 if the set is non-exhaustive), which means that most propositions should receive very, very low priors. I strongly suspect that it wouldn’t be even remotely the case—you’d be given a proposition, you couldn’t be sure it’s wrong “because the world of the future would look strange”, and so you’d give it some prior heavily biased towards 0.5, and then over all the propositions the sum would be far greater than 1.
When you’re making very specific stuff up about what the world of 6000 years from now will look like, it’s necessarily quite unlikely that you’re right and quite likely that you’re wrong, precisely because that future would seem strange. That the future is unpredictable works against specific visions of the future.
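(A minimal numeric sketch of the coherence point above; the scenario count and the lenient prior are made-up numbers:)

```python
# If many very specific, mutually exclusive future scenarios each get a lenient prior,
# the total blows past 1; coherent priors force most of them to be very low.
n_scenarios = 1000      # hypothetical count of distinct, incompatible visions of the future
lenient_prior = 0.3     # "can't rule it out, the future will look strange"
print(n_scenarios * lenient_prior)  # roughly 300 -- incoherent, far above 1
print(1 / n_scenarios)              # 0.001 -- the average prior if the scenarios exhaust the space
```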
This sounds to me like an argument over priors; I’ll tap out at this point.
Well, do you trust humans with humanity’s future? I’m not sure I do.
Maybe.
If you just make stuff up, the argument will be about priors. Observe: there’s a teapot in the asteroid belt.
Well yeah and I could trivially “defeat” any argument of yours by declaring my prior for it to be very low. My priors for the future are broadly distributed because the world we are in would seem very weird to a hunter-gatherer, so I think it’s likely that the world of 6,000 years from now will seem very weird to us. Heck, World War II would probably sound pretty fantastic if you described it to Columbus.
I’ll let you have the last word :)
Priors can’t go arbitrarily high before the sum over incompatible propositions becomes greater than 1.
If we were to copy your brain a trillion times over and ask it to give your “broadly distributed” priors for various mutually incompatible and very specific propositions, the results should sum to 1 (or to less than 1 if the set is non-exhaustive), which means that most propositions should receive very, very low priors. I strongly suspect that it wouldn’t be even remotely the case—you’d be given a proposition, you couldn’t be sure it’s wrong “because the world of the future would look strange”, and so you’d give it some prior heavily biased towards 0.5, and then over all the propositions the sum would be far greater than 1.
When you’re making very specific stuff up about what the world of 6000 years from now will look like, it’s necessarily quite unlikely that you’re right and quite likely that you’re wrong, precisely because that future would seem strange. That the future is unpredictable works against specific visions of the future.
Are there alternatives..?
PM’s precise point is also raised in the Sequences.
Yes, and John’s point is that just because it’s possible to arrive at that conclusion by a wrong method doesn’t actually mean it’s wrong.