I’d run it by people I know who are not cherry-picked to have rather unusual views.
A good point. Do you perhaps know some? Unfortunately, when it comes to predicting what actual implementations of proposed algorithms will really do, AI is a very divided field.
He’s hardly the only expert.
Please, find me a greater expert in AGI than Juergen Schmidhuber. Someone with more publications in peer-reviewed journals, more awards, more victories at learning competitions, more grants given by committees of tenured professors. Shane Legg and Marcus Hutter worked in his lab.
As we normally define credibility (i.e., a very credible scientist is one with many publications and grants, who works as a senior, tenured professor at a state-sponsored university), Schmidhuber is probably the most credible expert on this subject, as far as I’m aware.
A good point. Do you perhaps know some? Unfortunately, when it comes to predicting what actual implementations of proposed algorithms will really do, AI is a very divided field.
I’d talk with some mathematicians.
Please, find me a greater expert in AGI than Juergen Schmidhuber.
Interestingly, in the quoted piece he said he doesn’t think friendly AI is possible, and endorsed both the hard take-off (perhaps he means something different by this) and AI wars...
By the way, I’d support his group as far as ‘safety’ goes: neural networks seem particularly unlikely to undergo said “hard take-off”. Assuming gradual improvement, long before the AI that goes around killing everyone, the line of AIs that fail to learn what we want would give us an AI which (for example) whines very annoyingly, just like my dog is doing right now, and which, for all its pattern-recognition powers, can’t even get into the cupboard with the dog food. Getting stuck in a local maximum where annoying approaches go unexplored is a desirable feature in a learning process.
Interestingly, in the quoted piece he said he doesn’t think friendly AI is possible
And this is where I’d disagree with him, being probably more knowledgeable in machine ethics than he is. Ethical AI is difficult, but I would argue it’s definitely possible. That is, I don’t believe human notions of goodness are so completely, utterly incoherent that we will hate any and all possible universes into which we are placed, and certainly there have existed humans who loved their lives and their world.
If we don’t hate all universes and we love some universes, then the issue is just locating the universes we love and sifting them out from the ones we hate. That might be very difficult, but I don’t believe it’s impossible.
endorsed both the hard take-off (perhaps he means something different by this) and AI wars...
He did design the non-neural Goedel Machine to basically make a hard take-off happen. On purpose. He’s a man of immense chutzpah, and I mean that with all possible admiration.
That is, I don’t believe human notions of goodness are so completely, utterly incoherent
The problem is that, for a rational agent’s “utility function”, things like human desires or pain must be defined down at the level of the basic computational operations performed by human brains (and “the computational operations performed by something” might itself not even be a definable concept).
Then there’s also the ontology issue.
All the optimality guarantees for things like Solomonoff Induction are about predictions, not about the internal stuff inside the model: that works great for pressing your button, not so much for determining what people exist and what they want.
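To be concrete about what those guarantees actually say: one standard statement (roughly, for binary sequences; the exact constant depends on the formulation, so take this as a sketch rather than the canonical theorem) bounds only the cumulative prediction error,

$$\sum_{t=1}^{\infty} \mathbb{E}_\mu\!\left[\Big(M(x_t = 1 \mid x_{<t}) - \mu(x_t = 1 \mid x_{<t})\Big)^{2}\right] \;\le\; \frac{\ln 2}{2}\, K(\mu),$$

where $M$ is the Solomonoff mixture, $\mu$ is the true computable environment, and $K(\mu)$ is its Kolmogorov complexity. Nothing on the left-hand side refers to which program inside the mixture is doing the predicting, or to what that program’s internal variables “mean”.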
For the same observable data, there’s the most probable theory, but there’s also a slightly more complex theory which has far more people at stake. Picture a rather small modification to the theory which invokes the original theory many times over and makes an enormous number of people get killed depending on the number of anti-protons in this universe, or some other such variable that the AI can influence. There’s a definite potential of getting, say, an antimatter maximizer or black-hole minimizer or something equally silly out of a provably friendly AI that maximizes expected value over an ontology with a subtle flaw. Proofs do not extend to checking the sanity of the assumptions.
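A toy back-of-the-envelope version of that, with numbers made up purely for illustration, just to show how a small complexity penalty can be swamped by astronomical stakes:

```python
# Made-up numbers: a hypothesis that is 30 bits more complex (so ~10^-9 times
# less probable under a 2^-K prior) but posits 10^20 times more people at
# stake ends up dominating the expected-value calculation anyway.

K_BASE = 400        # description length (bits) of the ordinary world-model
EXTRA_BITS = 30     # cost of the "people die per anti-proton" modification

prior_base  = 2.0 ** -K_BASE
prior_weird = 2.0 ** -(K_BASE + EXTRA_BITS)

stakes_base  = 1e10   # people affected under the ordinary model
stakes_weird = 1e30   # people "at stake" under the modified model

ev_base  = prior_base  * stakes_base
ev_weird = prior_weird * stakes_weird

print(ev_weird / ev_base)   # ~9e10: the weird hypothesis drives the decision
```

The particular numbers don’t matter; the point is that the prior penalty only shrinks exponentially in the number of extra bits, while the stakes describable with those bits can grow far faster.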
He did design the non-neural Goedel Machine to basically make a hard take-off happen. On purpose. He’s a man of immense chutzpah, and I mean that with all possible admiration.
To be honest, I just fail to be impressed with things such as AIXI or the Goedel machine (which, admittedly, is cooler than the former).
I see the reliance on extremely effective algorithms for things such as theorem proving (especially in the presence of logical uncertainty) as the main obstacle to that kind of “neat AI”. Most people capable of doing such work would rather work on something that makes use of present and near-future technologies. Things like the Goedel machine seem to require far more power from the theorem prover than I would consider sufficient for the first person to create an AGI.
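For reference, my reading of the Goedel machine’s core loop, as a heavily simplified sketch (this is my paraphrase, not Schmidhuber’s actual formalism, and both the prover and the rewrite generator are stubs, which is rather the point):

```python
# Heavily simplified paraphrase of the Goedel machine's self-rewrite rule.
# Everything interesting is hidden inside find_proof(), which in the real
# machine searches for proofs over an axiomatized description of the
# machine's own code, its utility function, and its environment.

def generate_rewrites(current_code):
    """Stub: enumerate candidate self-modifications."""
    return []

def find_proof(claim):
    """Stub theorem prover: return a proof of `claim`, or None."""
    return None

def goedel_machine_step(current_code):
    for candidate in generate_rewrites(current_code):
        proof = find_proof(
            "expected utility of switching to this candidate exceeds "
            "expected utility of continuing the current proof search"
        )
        if proof is not None:
            # The defining feature: any part of the machine, including the
            # proof searcher itself, gets replaced as soon as a proof is found.
            return candidate
    return current_code

print(goedel_machine_step("initial solver"))  # with no prover power, no rewrite
```

Which is why the whole construction stands or falls with how much power you can actually get out of that prover.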