The concern is basically:

Evolution* is well understood.

And while it’s not a very good optimizer, if you run it long enough**, it’s “not safe” (it made us, after all): it can get to GI (or AGI).
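“Evolution as a dumb-but-relentless optimizer” is easy to make concrete. Below is a minimal sketch (the bit-string fitness function, parameters, and names are my own toy invention, not anything from this thread): mutation plus keep-if-no-worse selection, nothing else, and it still climbs to the optimum.

```python
import random

# A minimal (1+1) evolutionary algorithm -- an illustrative toy, not a
# serious optimizer. We evolve a bit string toward a fixed target by
# flipping one random bit per generation and keeping the child only if
# it is at least as fit as the parent.

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    """Number of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(generations=2000, seed=0):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in TARGET]
    for _ in range(generations):
        child = list(parent)
        i = rng.randrange(len(child))
        child[i] ^= 1                          # mutation: flip one bit
        if fitness(child) >= fitness(parent):  # selection: keep if no worse
            parent = child
        if fitness(parent) == len(TARGET):
            break
    return parent

print(fitness(evolve()))  # reaches the maximum on this toy problem
```

The whole “algorithm” is the mutate-then-select loop; everything interesting about the result lives in what the fitness landscape rewards, which is the point of footnote ** below.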
> a camera
I don’t think cameras think. They have a simple state determined by their settings (unless something’s broken), and they take images. While we could argue that they have a memory, 1) in the form of images and video, and 2) in the form of state, they don’t do anything with that information. (While you might be able to use them for video editing, that’s largely a manual process, and is mostly performed on (bigger) computers designed for more general use.)
Comparatively, the point of Machine Learning is... machine learning. Today that’s learning to recognize things (like human faces) and to do things (like play video games). If things get to the point where I don’t drive my car, but rather my car drives itself, that will be pretty big.
> a known and well-understood algorithm
And the thing about some of this new stuff, like neural nets, is that we don’t understand it. They’re not hardcoded; they learn. And the more difficult/complex the task, the harder they can be to understand. Currently, I’m more worried about a system we don’t understand being put in an important role and failing unexpectedly, or about people using these tools for sinister ends (I’m not the biggest fan of facial recognition tech), than about a superintelligent AGI. AlphaStar didn’t win by hacking its opponent’s computer or by causing a seizure.
*Or evolutionary algorithms
**What you’re running it on (the problem you’re trying to solve, and how complex it’s allowed to get) might also matter.
Just to be clear, when OpenAI trained GPT-2, I am not saying that GPT-2 is a known and well-understood algorithm for generating text, but rather that SGD (Stochastic Gradient Descent) is a known and well-understood algorithm for generating GPT-2. (I mean, OK sure, ML researchers are still studying SGD, but its inner workings are not an impenetrable mystery the way that GPT-2’s are.)
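That distinction shows up even in miniature. Here is a toy SGD loop (my own illustration; the model and data are a trivial linear fit, obviously nothing like GPT-2): the algorithm itself is a few transparent lines, while any question about *why these particular weights* is a question about the trained artifact, not about SGD.

```python
import random

# Stochastic gradient descent in miniature: fit y = w*x + b by taking a
# gradient step on the squared error of one example at a time. The
# update rule below is the entire algorithm.

def sgd_fit(data, lr=0.05, epochs=200, seed=0):
    """Fit y = w*x + b by SGD on per-example squared error."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:          # one example at a time: "stochastic"
            err = (w * x + b) - y
            w -= lr * err * x      # gradient of (err**2)/2 w.r.t. w
            b -= lr * err          # gradient of (err**2)/2 w.r.t. b
    return w, b

# Data generated from y = 2x + 1; SGD recovers roughly those parameters.
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
w, b = sgd_fit(list(data))
print(round(w, 2), round(b, 2))
```

Scale the parameter count from two up to billions and the update rule barely changes, but the resulting weights stop being inspectable by eye, which is the asymmetry the paragraph above is pointing at.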