I am a programmer, and I, for one, do not see a very strong connection between the potential for building an AGI and programming ability. An AGI isn’t going to come about because you made a really sweet sorting algorithm; it’s going to come about because you had a key insight about what thought is (or something along those lines). 1337 programming skillz probably don’t help a lot with that.
AGI requires John von Neumann or Alan Turing or the like. Either of them would have decent programming expertise today.
Building an AGI requires something that would also result in familiarity with the tool-set of mankind, including the actual use of computers for reasoning, which requires the ability to program. It is enough that programming expertise might be useful for the upcoming AGI insight-maker to have become a good programmer.
I am a programmer, and I for one, do not see a very strong connection between the potential for building an AGI and programming ability.
Do you think that intelligence is going to be quite simple with hindsight? Something like Einstein’s mass–energy equivalence formula? Because if it is instead ‘modular’, then I don’t see how programmers, or mathematicians, won’t be instrumental in making progress towards AGI. Take for example IBM Watson:
When a question is put to Watson, more than 100 algorithms analyze the question in different ways, and find many different plausible answers–all at the same time. Yet another set of algorithms ranks the answers and gives them a score. For each possible answer, Watson finds evidence that may support or refute that answer. So for each of hundreds of possible answers it finds hundreds of bits of evidence and then with hundreds of algorithms scores the degree to which the evidence supports the answer. The answer with the best evidence assessment will earn the most confidence. The highest-ranking answer becomes the answer. However, during a Jeopardy! game, if the highest-ranking possible answer isn’t rated high enough to give Watson enough confidence, Watson decides not to buzz in and risk losing money if it’s wrong. The Watson computer does all of this in about three seconds.
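The pipeline that passage describes (many candidate generators, evidence scoring, confidence aggregation, and a buzz-in threshold) can be sketched in miniature. This is a toy illustration only; every name, generator, scorer, and number here is my invention, not IBM’s actual design.

```python
# Toy sketch of the quoted pipeline: several candidate generators propose
# answers, several evidence scorers rate each candidate, and the system
# only "buzzes in" when the top aggregate confidence clears a threshold.

def generate_candidates(question, generators):
    """Pool the answers proposed by every generator (deduplicated)."""
    candidates = set()
    for gen in generators:
        candidates.update(gen(question))
    return candidates

def confidence(candidate, question, scorers):
    """Average the evidence scores from every scorer for one candidate."""
    scores = [scorer(candidate, question) for scorer in scorers]
    return sum(scores) / len(scores)

def answer(question, generators, scorers, threshold=0.5):
    """Return the best-supported candidate, or None (don't buzz in)."""
    ranked = sorted(
        ((confidence(c, question, scorers), c)
         for c in generate_candidates(question, generators)),
        reverse=True,
    )
    if not ranked or ranked[0][0] < threshold:
        return None  # not confident enough to risk buzzing in
    return ranked[0][1]

# Tiny demo with stand-in generators and scorers.
gens = [lambda q: {"Paris", "Lyon"}, lambda q: {"Paris"}]
scorers = [
    lambda c, q: 0.9 if c == "Paris" else 0.2,
    lambda c, q: 0.8 if c == "Paris" else 0.1,
]
print(answer("Capital of France?", gens, scorers))                  # Paris
print(answer("Capital of France?", gens, scorers, threshold=0.95))  # None
```

Even in this caricature, the interesting work is in the hundreds of real generators and scorers, which is the point: that is engineering effort, not a single insight.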
It takes a company like IBM to design such a narrow AI. More than 100 algorithms. Could it have been done without a lot of computational and intellectual resources? Can progress be made without tapping into the workings of the human brain, without designing specially optimized hardware, without programming and debugging? Does it really only take some smarts and contemplation to come up with a few key insights and get something that can take over the universe? I’d be interested to learn how one can arrive at that conclusion.
I agree mathematicians are likely to be useful in making AGI. If the folks at SIAI were terrible at math, that would be a bad sign indeed.
I wouldn’t say ‘simple’, but I would be surprised if it were complex in the same way that Watson is complex. Watson is complex because statistical algorithms can be complex, and Watson has a lot of them. As far as I can tell, there’s nothing conceptually revolutionary about Watson; it’s just a neat and impressive statistical application. I don’t see a strong relationship between Watson-like narrow AI and the goal of AGI.
An AGI might have a lot of algorithms (because intelligence turns out to have a lot of separate components), but the difficulty will be understanding the nature of intelligence, coming up with algorithms, and proving the important properties of those algorithms. I wouldn’t expect “practical implementation” to be a separate step where you need programmers, because I would expect everything to be implemented in some kind of proof environment.
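A toy illustration, in Lean 4, of what “implemented in a proof environment” means: the algorithm’s definition and a machine-checked property of it live in the same system, so implementation and verification aren’t separate steps. The `double` example is mine and deliberately trivial, not anything AGI-specific.

```lean
-- Define an algorithm...
def double (n : Nat) : Nat := n + n

-- ...and prove a property of it in the same environment.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```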
Agreed. AGI requires Judea Pearl more than it requires John Carmack.