After taking a look at the research pages, I’m not very afraid of these people, at least not until they get computers powerful enough to brute-force AGI by simulated evolution or some other method. I’m more afraid of Shane Legg who does top-notch technical work (far beyond anything I’m capable of), understands the danger of uFAI and ranks it as the #1 existential risk, and still cheers for stuff like Monte Carlo AIXI. I’m afraid of Abram Demski who wrote brilliant comments on LW and still got paid to help design a self-improving AGI (Genifer).
It would help me a lot if you could email or pm me the names of people who you are afraid of so that I can contact them. Thank you.
email: xixidu@gmail.com or da@kruel.co
You could also try contacting Justin Corwin who won 24 out of 26 AI-box experiments and now develops AGI at a2i2.
24 out of 26?! Since Eliezer won his first two, I was already reasonably certain that AI boxing is effectively impossible (at least once you give it permission to talk to some humans), so I won’t meaningfully update here. But this piece of evidence was quite unexpected.
Those (three) people are not in the AI field, at least for my taste. But:
Why do you think present computers are not fast enough for the digital evolution of X?
A mind designed by evolution could be big and messy, about as complex as the human brain. Right now we have no computer powerful enough to simulate even a single human brain, and evolution requires many of those. Of course there are many possible shortcuts, but we don’t seem to be there yet.
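The scale problem can be made concrete with a back-of-envelope calculation. This is only a rough sketch using commonly cited order-of-magnitude figures for the brain (neuron count, synapses per neuron, firing rate); none of these numbers come from the thread itself, and real estimates vary by orders of magnitude depending on the level of simulation detail assumed.

```python
# Rough estimate of compute needed to simulate one human brain at the
# "one operation per synaptic event" level.  All constants are commonly
# cited order-of-magnitude figures, not measurements.

NEURONS = 8.6e10           # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron (rough average)
FIRING_RATE_HZ = 100       # generous upper bound on average firing rate

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ
print(f"~{ops_per_second:.1e} synaptic events/s for one brain")

# Compare against a petascale supercomputer (~1e15 FLOP/s):
PETAFLOP = 1e15
print(f"~{ops_per_second / PETAFLOP:.0f}x a petaflop machine, for a "
      "single brain; evolution needs whole populations of them")
```

Under these assumptions a single brain already demands tens of petaflops sustained, and an evolutionary search would multiply that by the population size and the number of generations.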
The question really is: can a program with an evolutionary algorithm at its core do something better than a small elite of talented humans (with the help of computer programs) can?
The answer is yes: it can do it today, and it does.
People here on this list are mostly dismissive, along the lines of “evolution is stupid, anybody can do it, and it’s a waste of CPU time”.
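For concreteness, the kind of evolutionary search under discussion can be sketched in a few lines. This is a toy genetic algorithm on the standard “OneMax” problem (evolve a bitstring toward all ones), not a model of any system mentioned in the thread; the population size, mutation rate, and selection scheme are arbitrary illustrative choices.

```python
import random

# Toy genetic algorithm: evolve a bitstring toward all ones ("OneMax").
# A minimal sketch of evolutionary search in general.

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 40, 60, 100, 0.02

def fitness(genome):
    return sum(genome)  # number of 1 bits; maximum is GENOME_LEN

def mutate(genome):
    # flip each bit independently with probability MUT_RATE
    return [b ^ (random.random() < MUT_RATE) for b in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME_LEN:
        break
    parents = pop[:POP_SIZE // 2]  # truncation selection (keeps elites)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", GENOME_LEN)
```

The point of the toy is only that selection plus variation climbs toward good solutions with no human insight into the problem; the debated question is whether that scales to hard design problems within realistic CPU budgets.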
See!
or
All of the latter have been evolved in a digital environment, with no additional expert knowledge from humans. Sooner or later we will be evolving pretty much everything, all the big talk about AI from some web experts aside.
By the time it seems we are there, we’ll already be past that point.