I talk to a guy on a private AGI IRC server sometimes. He now works for them. He does some really impressive AI work.
He can’t talk about most of the stuff he is working on now due to NDAs. But he did mention that he is working on (and has worked on in the past) evolving learning rules for AIs instead of hand-coding them.
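To give a sense of what "evolving learning rules" can mean in the simplest case, here is a toy sketch of my own (in Python/numpy, not anything from that company or from him): parameterize a Hebbian-style weight update with a few coefficients, then run a basic evolutionary search over those coefficients instead of hand-picking them.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_rule(coeffs, n_inputs=4, n_steps=50, lr=0.1):
    """Train a tiny one-layer net on an identity-association task using the
    candidate plasticity rule; return negative final error as fitness."""
    A, B, C, D = coeffs
    W = np.zeros((n_inputs, n_inputs))
    target = np.eye(n_inputs)  # mapping the evolved rule should recover
    for _ in range(n_steps):
        x = np.zeros(n_inputs)
        x[rng.integers(n_inputs)] = 1.0   # random one-hot presynaptic input
        y = target @ x                    # clamped "teacher" output
        # generalized Hebbian update: dW_ij = lr*(A*x_j*y_i + B*x_j + C*y_i + D)
        W += lr * (A * np.outer(y, x) + B * x[None, :] + C * y[:, None] + D)
    err = np.mean((W - target) ** 2)
    return -err

def evolve(pop_size=20, generations=30, sigma=0.3):
    """Simple (mu + lambda) evolutionary search over the rule's coefficients."""
    pop = rng.normal(0.0, 1.0, size=(pop_size, 4))
    for _ in range(generations):
        fitness = np.array([evaluate_rule(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[-pop_size // 4:]]     # keep best quarter
        kids = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        kids = kids + rng.normal(0.0, sigma, size=kids.shape)  # mutate copies
        pop = np.vstack([elite, kids])
    return pop[np.argmax([evaluate_rule(ind) for ind in pop])]

if __name__ == "__main__":
    print("best rule coefficients (A, B, C, D):", evolve())
```

The real systems are presumably far more elaborate (richer rule parameterizations, harder tasks, evolved alongside architectures), but the basic idea is the same: the learning rule itself becomes the thing being searched for, rather than something a human writes down.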
I discussed AI risk with him, but he doesn’t particularly care about it. He thinks an intelligence explosion is possible, but that an unfriendly AI wouldn’t be so bad. It would just be the next step of evolution. I see the same view in some of the comments on that blog post, though I’m not sure if they are from members of that organization.
I see similar views about AI risk even in well-respected and accomplished AI researchers like Jürgen Schmidhuber.
The other thing that's different about this company is that they come from the game industry. They appear to have written their own neural network code from scratch in CUDA. It works on Windows and has a good user interface.