I never claimed that evolution did a good job, but I would argue that it gave us a primary directive: to further the human species. All of our desires are part of our programming; they should align perfectly with desires that would optimize that primary goal, but they don’t. Simply put, mistakes were made. Since the most effective way we have seen of developing optimizing programs is machine learning, which is very similar to evolution, we should be very careful about the desires of any singleton created by this method.
I’m not sure of your assertion that the best advances in AI so far came from mimicking the brain.
Mimicking the human brain is fundamental to most AI research; DeepMind’s website says they employ computational neuroscientists, and companies such as IBM are very interested in whole-brain emulation.
No, it didn’t. That’s why I linked “Adaptation Executers, not Fitness Maximizers”. Evolution didn’t even “try to” give us a primary directive; it just increased the frequency of anything that worked on the margin. But I agree that we shouldn’t rely on machine learning to find the right utility function.
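To illustrate what I mean by “worked on the margin”, here is a toy simulation (the variant names and fitness numbers are invented purely for illustration): the update rule never represents a goal anywhere, yet a marginally better replicator still ends up dominating.

```python
import random

# Two heritable variants; one reproduces slightly better on the margin.
# Nothing in the loop encodes a "primary directive" -- frequencies just
# shift toward whatever happens to leave more copies.
fitness = {"variant_A": 1.00, "variant_B": 1.05}  # small edge for B
population = ["variant_A"] * 500 + ["variant_B"] * 500

for generation in range(200):
    # Each individual leaves offspring in proportion to its fitness.
    weights = [fitness[v] for v in population]
    population = random.choices(population, weights=weights, k=len(population))

share_b = population.count("variant_B") / len(population)
print(f"variant_B frequency after 200 generations: {share_b:.2f}")
# Typically close to 1.0: selection amplified the marginal advantage
# without any directive ever being represented or optimized.
```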
Only a pantheist would claim that evolution is a personal being, so of course it can’t “try to” do anything. It is, however, a directed process, serving to favor individuals that can better further the species.
How would you suggest we find the right utility function without using machine learning?
How would you find the right utility function using machine learning? With machine learning you have to have some way of classifying examples as good vs bad. That classifier itself is equivalent to the FAI problem.
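To make that concrete, here is a minimal supervised-learning sketch (the library choice and the toy “outcome” features are placeholders I picked for illustration): the model can only reproduce whatever labels a human supplies, so deciding which outcomes count as “good” has to happen before any learning starts, and writing that decision down correctly for every case that matters is exactly the FAI problem.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical "outcome" feature vectors and human-supplied labels.
# The numbers and the labelling are invented; the point is that the
# value judgement lives entirely in the labels.
outcomes = [
    [0.9, 0.1],  # e.g. lots of flourishing, little suffering
    [0.8, 0.2],
    [0.1, 0.9],  # little flourishing, lots of suffering
    [0.2, 0.8],
]
labels = [1, 1, 0, 0]  # 1 = "good", 0 = "bad" -- decided by a human

classifier = LogisticRegression().fit(outcomes, labels)
print(classifier.predict([[0.7, 0.3]]))  # extrapolates the labeller's judgement

# The learned classifier is only as "friendly" as its training labels:
# machine learning optimizes against the labels, it does not tell you
# what the labels should have been.
```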
If I find out, you’ll be one of the first to know.
The point I am making is that machine learning, though not provably safe, is the most effective way we can imagine of making the utility function. It’s very likely that many AIs are going to be created by this method, and if the failure rate is anywhere near as high as it is for humans, this could be very serious indeed. Some misguided person may attempt to create an FAI using machine learning, and then we may have the situation described in the H+ article.
Congratulations! You’ve figured out that UFAI is a threat!
That wasn’t what I claimed. I proposed that the current most promising methods of producing an FAI are far too likely to produce a UFAI to be considered safe.
Why do you think the whole website is obsessed with provably-friendly AI? The whole point of MIRI is that pretty much every superintelligence that is anything other than provably safe is going to be unfriendly! This site is littered with examples of how badly almost-friendly AI would go wrong! We don’t consider current methods “too likely” to produce a UFAI; we think they’re almost certainly going to produce UFAI! (Conditional on creating a superintelligence at all, of course.)
So as much as I hate asking this question because it’s alienating, have you read the sequences?
Mimicking the human brain is an obscure branch of AI. Most AI projects, and certainly the successful ones you’ve heard about, are at best inspired by stripped down models of specific isolated aspects of human thought, if they take any inspiration from the human brain at all.
DeepMind, for example, is reinforcement learning on top of modern machine learning. Machine learning may make use of neural networks, but beware of the name: neural networks only superficially resemble the biological structures from which they take their name. DeepMind doesn’t work anything like the human brain, and neither do Watson, Deep Blue, or self-driving cars.
Learn a bit about practical AI and neuroscience and you’d be surprised how little they have in common.
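For a sense of how loose the resemblance is, here is roughly what the building blocks look like (a bare-bones sketch under my own simplifications, not DeepMind’s actual code): an artificial “neuron” is a weighted sum pushed through a squashing function, and the reinforcement-learning layer on top is a bookkeeping rule for expected reward.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One unit of a typical artificial neural network: a dot product and
    a squash. No spikes, no neurotransmitters, no dendritic geometry."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid nonlinearity

# Arbitrary illustrative numbers; in a real system the weights are set by
# gradient descent on a loss function, not by anything resembling biology.
print(artificial_neuron([0.5, -1.2, 0.3], [0.4, 0.1, -0.7], bias=0.05))

# Tabular Q-learning, the textbook form of the reinforcement learning that
# sits on top (deep RL replaces the table q with a neural network):
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
```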