If we are talking about a full-fledged general intelligence here (Skynet), there is no arguing that it poses no risk; I believe all we disagree about are definitions. That advanced nanotechnology of the kind depicted in fiction would pose real-world risks is indisputable. I'm merely saying that the nanotechnology researchers are actually working on has the potential to lead to grey goo scenarios, but that there is no inherent risk that any given line of work on it will lead down that pathway.
It is incredibly hard to come up with an intelligence that knows what planning consists of, and that knows, and cares, enough to judge which steps are instrumental. This won't just happen accidentally, and it will likely require knowledge sufficient to set scope boundaries as well. Again, this is not an argument that there is no risk, but that the risk is not as great as some people believe it to be.
If we are talking about a full-fledged general intelligence here (Skynet), there is no arguing that it poses no risk; I believe all we disagree about are definitions. That advanced nanotechnology of the kind depicted in fiction would pose real-world risks is indisputable. I'm merely saying that the nanotechnology researchers are actually working on has the potential to lead to grey goo scenarios, but that there is no inherent risk that any given line of work on it will lead down that pathway.
Please stay focused; focus is one of the most important tools. The paragraph above is unrelated to what I have addressed in this conversation.
It is incredibly hard to come up with an intelligence that knows what planning consists of, and that knows, and cares, enough to judge which steps are instrumental. This won't just happen accidentally, and it will likely require knowledge sufficient to set scope boundaries as well.
Review the above paragraph: what you are saying is that AIs are hard to build. But chess AIs, to give an example, certainly do plan: they don't perform only the moves they are "told" to perform.
What I am talking about is that full-fledged AGI is incredibly hard to achieve, and that therefore almost all AGI projects will fail at something other than limiting the AGI's scope. It is therefore unlikely that work on AGI is as dangerous as proposed.
That is, it is much more likely that any given chess AI will fail to beat a human player than that it will win. Researchers nevertheless work on chess AIs, and those AIs fit the definition of a general chess AI. Yet getting everything about a chess AI exactly right, so that it can beat any human, while failing to implement certain performance boundaries (e.g. limits on the strength of its play, or safeguards against overheating its CPUs) is an unlikely outcome. It is more likely that the AI will be good at chess but not superhuman, that it will fail to improve, or that it will be slow or biased, than that it will succeed on all of the above and additionally break out of its scope boundaries.
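The distinction being drawn here, between a game engine that plans on its own and one whose strength is deliberately bounded, can be sketched with a depth-limited minimax search. A toy game (simple Nim, standing in for chess; the code and names are purely illustrative, not any real engine) keeps the sketch short, but the structure is the same: the search derives its moves itself, while the depth parameter acts as an externally imposed performance boundary.

```python
def minimax(sticks, maximizing, depth):
    """Search the game tree of simple Nim: players alternately take
    1-3 sticks, and whoever takes the last stick wins."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # performance boundary: stop planning, score as unknown
    values = [minimax(sticks - take, not maximizing, depth - 1)
              for take in (1, 2, 3) if take <= sticks]
    return max(values) if maximizing else min(values)

def best_move(sticks, depth=6):
    """Pick the move with the best minimax value. The engine works out
    the instrumental steps itself; only the depth bound is imposed."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, False, depth - 1))

print(best_move(5))           # finds the winning move: take 1 stick
print(minimax(5, True, 1))    # with depth 1 it cannot see the win: 0
```

With enough search depth the engine finds the winning move entirely on its own; with `depth=1` it cannot look far enough ahead to prefer any move, so the imposed boundary directly caps the strength of its play.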
So the discussion is about whether the idea that any work on AGI is incredibly dangerous is a strong one, or whether it can be weakened.
Because planning consists in figuring out instrumental steps on your own.
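As a toy illustration of this point (a minimal sketch under my own assumptions, not any particular AI system): a breadth-first planner, given only a start state, a goal, and a set of available actions, derives the intermediate, instrumental steps by itself.

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search over states: returns the shortest sequence
    of action names leading from start to goal, or None if unreachable.
    The instrumental steps are discovered, not supplied."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, apply_action in actions.items():
            nxt = apply_action(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None

# Hypothetical toy domain: reach the number 10 from 1, staying <= 20.
actions = {"double": lambda n: n * 2 if n * 2 <= 20 else None,
           "inc":    lambda n: n + 1 if n + 1 <= 20 else None}

print(plan(1, 10, actions))   # ['double', 'double', 'inc', 'double']
```

Nobody "tells" the planner that passing through 5 is useful; it selects 1 → 2 → 4 → 5 → 10 as instrumental steps purely because they lead to the goal.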
Yes, broken AIs, such as humans or chimps, are possible.