Just because it doesn’t do exactly what you want doesn’t mean it is going to fail in some utterly spectacular way.
You aren’t searching for solutions to a real-world problem, you are searching for solutions to a model (ultimately, for solutions to systems of equations), and not only do you have a limited solution space, you don’t model anything irrelevant. Furthermore, the search space is not 2d, not 3d, not even 100d; its volume grows very rapidly with dimension. Predictions about many systems are fundamentally limited by the Lyapunov exponent. I suggest you stop thinking in terms of concepts like ‘improve’.
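(As a minimal illustration of the Lyapunov point, here is a toy sketch that is not part of the original argument; the chaotic logistic map stands in for any chaotic system. Two trajectories that start almost identically separate exponentially fast, so each step of prediction costs roughly one bit of precision in the initial condition.)

```python
# Toy illustration: nearby trajectories of the chaotic logistic map (r = 4)
# separate roughly as exp(lambda * t) with lambda = ln 2, so long-horizon
# prediction is limited no matter how precise the model is.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001  # initial separation of 1e-6
for t in range(1, 31):
    x, y = logistic(x), logistic(y)
    if t % 5 == 0:
        print(f"t={t:2d}  separation={abs(x - y):.3e}")
# The separation roughly doubles per step until it saturates at the size
# of the attractor, after which the two trajectories are unrelated.
```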
If something self-improves at the software level, that’ll be a piece of software created with a very well-defined model of changes to itself, and the self-improvement itself will be concerned with cutting down the solution space and cutting down the model. If something self-improves at the hardware level, likewise for its model of physics. Everyone wants an artificial Rain Man. Autism is what you get from all sorts of random variations on the baseline human brain; it looks like a general intelligence that expands its model and doesn’t just focus intensely is a tiny spot in the design space. I don’t see why one would expect general intelligence to suddenly overtake specialized intelligences; the specialized intelligences have better people working on them, they have the funding, and the specialization massively improves efficiency; superhuman specialized intelligences require less hardware power.
Just because it doesn’t do exactly what you want doesn’t mean it is going to fail in some utterly spectacular way.
I certainly agree, and I am not even sure what the official SI position is on the probability of such failure. I know that Eliezer, in his writing, does give the impression that any mistake will mean certain doom, which I believe to be an exaggeration. But failure of this kind is fundamentally unpredictable, and if a low-probability event kills you, you are still dead; I think the probability is high enough that a Friendly AI type effort would not be wasted.
(ultimately, for solutions to systems of equations)
That is true in the trivial sense that everything can be described as equations, but when thinking about how the computational process actually happens this becomes almost meaningless. If the system is not constructed as a search problem over high-dimensional spaces, then its failure modes in particular cannot be usefully thought about in such terms, even if it is fundamentally isomorphic to such a search.
that’ll be a piece of software created with a very well-defined model of changes to itself
Or it will be created by intuitively assembling random components and seeing what happens, in which case there is no guarantee about what it will actually do to its own model, or even about what it is actually solving for. Convincing AI researchers to only allow an AI to self-modify when it is stable under self-modification is a significant part of the Friendly AI effort.
Everyone wants an artificial Rain Man.
There are very few statements that are true about “everyone”, and I am very confident that this is not one of them. Even if most people with the actual means to build one want specialized and/or tool AIs, you only need one successful unfriendly AGI project to potentially cause a lot of damage. This is especially true as hardware costs fall and more AI knowledge is developed and published, lowering the costs of entry.
I don’t see why one would expect general intelligence to suddenly overtake specialized intelligences;
To be dangerous, AGI doesn’t have to overtake specialized intelligences; it has to overtake humans. The existence of specialized AIs is either irrelevant or increases the risk from AGI, since they would be available to both sides, and presumably AGIs would have lower interfacing costs.
I certainly agree, and I am not even sure what the official SI position is on the probability of such failure. I know that Eliezer, in his writing, does give the impression that any mistake will mean certain doom, which I believe to be an exaggeration. But failure of this kind is fundamentally unpredictable, and if a low-probability event kills you, you are still dead; I think the probability is high enough that a Friendly AI type effort would not be wasted.
‘Unpredictable’ is a subjective quality. It would look much better if the people speaking of unpredictability had demonstrable accomplishments. If there are a trillion equally probable unpredictable outcomes, of which only a handful are the destruction of mankind, then even though the outcome is technically, fundamentally unpredictable, the probability is low. Unpredictability does not imply the likelihood of the scenario; if anything, unpredictability implies lower risk. I am sensing either a bias or dark arts here; ‘unpredictable’ is a negative word. Highly specific predictions should be lowered in probability when updating on a statement like ‘unpredictable’.
That is true in the trivial sense that everything can be described as equations, but when thinking about how the computational process actually happens this becomes almost meaningless.
Not everything is equally easy to describe as equations. For example, we don’t know how to describe the number of real-world paperclips with a mathematical equation. We can describe the performance of a design with an equation, and then solve for the maximum, but that is not identical to ‘maximizing the performance of a real-world chip’.
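(A minimal sketch of that gap, with an invented toy “chip performance” model; the numbers and the throttling term are assumptions made purely for illustration. Solving the model’s equation for its maximum is straightforward, but the model’s maximizer need not maximize the real-world quantity the model only approximates.)

```python
# Toy example: the modelled objective is an equation we can optimize exactly,
# but "reality" contains a term the model leaves out, so the two maxima differ.
import numpy as np

def modelled_performance(clock_ghz):
    # invented analytic model: performance rises with clock, minus a modelled heat penalty
    return clock_ghz - 0.05 * clock_ghz**2

def real_performance(clock_ghz):
    # "reality" adds an unmodelled throttling effect above 6 GHz
    return modelled_performance(clock_ghz) - 0.2 * np.maximum(0.0, clock_ghz - 6.0)**2

clocks = np.linspace(1.0, 12.0, 1101)
best_model = clocks[np.argmax(modelled_performance(clocks))]
best_real = clocks[np.argmax(real_performance(clocks))]
print(f"model optimum: {best_model:.2f} GHz, real optimum: {best_real:.2f} GHz")
```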
If the system is not constructed as a search problem over high-dimensional spaces, then its failure modes in particular cannot be usefully thought about in such terms, even if it is fundamentally isomorphic to such a search.
The problem is that of finding a point in a high-dimensional space.
Or it will be created by intuitively assembling random components and seeing what happens, in which case there is no guarantee about what it will actually do to its own model, or even about what it is actually solving for. Convincing AI researchers to only allow an AI to self-modify when it is stable under self-modification is a significant part of the Friendly AI effort.
I think you have a very narrow vision of ‘unstable’.
Even if most people with the actual means to build one want specialized and/or tool AIs, you only need one successful unfriendly AGI project to potentially cause a lot of damage. This is especially true as hardware costs fall and more AI knowledge is developed and published, lowering the costs of entry.
To be dangerous, AGI has to win in a future ecosystem where the low-hanging fruit has already been taken. ‘General’ is a positive-sounding word; beware of the halo effect.
To be dangerous, AGI doesn’t have to overtake specialized intelligences; it has to overtake humans. The existence of specialized AIs is either irrelevant or increases the risk from AGI, since they would be available to both sides, and presumably AGIs would have lower interfacing costs.
I believe that is substantially incorrect. Suppose there were an AGI in your basement, connected to the internet, in an ecosystem of very powerful specialized AIs. The internet is secured by specialized network-security AIs, and would already have been taken over by a specialized botnet if it were not; you don’t have a chip fabrication plant in your basement; the specialized AIs elsewhere are running on massive hardware, designing better computing substrates, better methods of solving, and so on. What exactly is this AGI going to do?
This is going nowhere. Too much anthropomorphization.
Highly specific predictions should be lowered in probability when updating on a statement like ‘unpredictable’.
That depends on what your initial probability is and why. If it is already low due to updates on predictions about the system, then updating on “unpredictable” will increase the probability by lowering the strength of those predictions. Since the destruction of humanity is rather important, even if the existential AI risk scenario is of low probability, it matters exactly how low.
This of course has the same shape as Pascal’s mugging, but I do not believe that SI claims are of low enough probability to be dismissed as effectively zero.
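(A small worked example, in odds form and with made-up numbers, of the kind of update meant above: a confident “it will behave as predicted” argument counts as evidence against the doom scenario, and “unpredictable” is a reason to discount that evidence.)

```python
# Odds-form Bayes with invented numbers: a confident prediction of safe
# behaviour is evidence against the doom scenario; calling the system
# "unpredictable" weakens that evidence, so the posterior moves back up
# while staying below the prior.
def prob(odds):
    return odds / (1.0 + odds)

prior_odds = 1 / 9    # P(doom) = 0.10 before detailed arguments
strong_lr  = 1 / 10   # likelihood ratio of a confident "it will stay safe" prediction
weak_lr    = 1 / 2    # the same prediction, discounted because the system is unpredictable

print(f"with the strong prediction:   {prob(prior_odds * strong_lr):.3f}")  # ~0.011
print(f"with the prediction weakened: {prob(prior_odds * weak_lr):.3f}")    # ~0.053
```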
Not everything is equally easy to describe as equations.
That was in fact my point, which might indicate that we are talking past each other. What I tried to say is that an artificial intelligence system is not necessarily constructed as an explicit optimization process over an explicit model. If the model and the process are implicit in its cognitive architecture, then making predictions about what the system will do in terms of a search is of limited usefulness.
And even when talking about models, getting back to this:
cutting down the solution space and cutting down the model
On further thought, this is not even necessarily true. The solution space and the model will have to be pre-cut by someone (presumably human engineers) who doesn’t know where the solution actually is. A self-improving system will have to expand both if the solution lies outside them, in order to find it. A system that can reach a solution even when it is initially over-constrained is more useful than one that can’t, and so someone will build it.
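(A minimal sketch of that behaviour; the objective and the bounds are invented for illustration. A search whose best point keeps landing on the boundary of the pre-cut box can widen the box and still reach the optimum; one that treats the pre-cut space as fixed cannot.)

```python
# Toy example: the true optimum (x = 7.3) lies outside the initial,
# human-chosen bounds [0, 2]; the search expands its box whenever the
# best point it found sits on the boundary.
import numpy as np

def objective(x):
    return -(x - 7.3) ** 2

lo, hi = 0.0, 2.0  # bounds pre-cut by someone who guessed wrong
for _ in range(10):
    grid = np.linspace(lo, hi, 201)
    best = grid[np.argmax(objective(grid))]
    at_edge = np.isclose(best, lo) or np.isclose(best, hi)
    print(f"bounds [{lo:5.1f}, {hi:5.1f}]  best x = {best:.2f}  on boundary: {at_edge}")
    if not at_edge:
        break
    lo, hi = lo - (hi - lo), hi + (hi - lo)  # triple the width of the box and retry
```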
I think you have a very narrow vision of ‘unstable’.
I do not understand what you are saying here. If you mean that by ‘unstable’ I have in mind some highly specific trajectory that a system which has lost stability will follow, that is because all the trajectories where the system simply crashes and burns are unimportant. If you have a trillion optimization systems running on a planet at the same time, you have to be really sure that nothing can go wrong.
I just realized I derailed the discussion. The whole question of an AGI in a world of specialized AIs is irrelevant to what started this thread. As far as the chronology of development goes, I cannot tell how likely it is that AGI could overtake specialized intelligences. It really depends on whether there is a critical insight still missing for the construction of AI. If AI is just an extension of current software, then specialized intelligences will win for the reasons you state, although some of the caveats I wrote above still apply.
If there is a critical difference in architecture between current software and AI, then whoever hits on that insight will likely overtake everyone else. If they happen to be working on AGI, or on any system entangled with the real world, I don’t see how one can guarantee that the consequences will not be catastrophic.
Too much anthropomorphization.
Well, I in turn believe you are applying overzealous anti-anthropomorphization. It is normally a perfectly good heuristic when dealing with software, but the fact is that human intelligence is the only thing we have in the “intelligence” reference class, and although AIs will almost certainly be different, they will not necessarily be different in every possible way. Especially considering the possibility of AIs that are either directly based on human-like architecture or designed to directly interact with humans, which requires having at least some human-compatible models and behaviours.
That depends on what your initial probability is and why. If it is already low due to updates on predictions about the system, then updating on “unpredictable” will increase the probability by lowering the strength of those predictions. Since the destruction of humanity is rather important, even if the existential AI risk scenario is of low probability, it matters exactly how low.
The importance should not weigh upon our estimation, unless you are proclaiming that I should succumb to a bias. Furthermore, it is the destruction of mankind that is the prediction being made here, via a multitude of assumptions, the most dubious of which is that the system will have a real-world, physical goal. ‘Number of paperclips’ is not easy to specify.
On further thought, this is not even necessarily true. The solution space and the model will have to be pre-cut by someone (presumably human engineers) who doesn’t know where the solution actually is. A self-improving system will have to expand both if the solution lies outside them, in order to find it. A system that can reach a solution even when it is initially over-constrained is more useful than one that can’t, and so someone will build it.
Sorry, you are factually wrong about how the design of automated tools works. The rest of your argument presses too hard to recruit a multitude of importance-related biases and cognitive fallacies that have been described on this very site.
If you have a trillion optimization systems running on a planet at the same time, you have to be really sure that nothing can go wrong.
No I don’t, not if the systems that work right have already taken all the low-hanging fruit before the one that goes wrong can pick it.
Well, I in turn believe you are applying overzealous anti-anthropomorphization. It is normally a perfectly good heuristic when dealing with software, but the fact is that human intelligence is the only thing we have in the “intelligence” reference class, and although AIs will almost certainly be different, they will not necessarily be different in every possible way. Especially considering the possibility of AIs that are either directly based on human-like architecture or designed to directly interact with humans, which requires having at least some human-compatible models and behaviours.
You seem to keep forgetting all the software that is fundamentally different from the human mind yet solves its problems very well. The issue reads like a belief in the extreme superiority of man over machine, except that it is the superiority of anthropomorphized software over all other software.