An existing machine—in this “Volvo S60 - Pedestrian Detection” advert—seems to identify humans, and behave nicely towards them. Here is more:

Mobileye’s pedestrian detection technology is based on the use of mono cameras only, using advanced pattern recognition and classifiers with image processing and optic flow analysis. Both static and moving pedestrians can be detected to a range of around 30m using VGA resolution imagers. As higher resolution imagers become available, range will scale with imager resolution, making detection ranges of up to 60m feasible.
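To see why detection range should scale with imager resolution, here is a back-of-the-envelope sketch under a simple pinhole-camera model. This is my own illustration, not Mobileye's method; the pedestrian height, field of view, and minimum pixel height are invented assumptions.

```python
# Why detection range scales with resolution, assuming a pinhole camera:
# a pedestrian must span some minimum number of pixels for a classifier
# to fire, and projected height in pixels grows with focal length, which
# grows with resolution at a fixed field of view. All numbers are
# illustrative assumptions, not Mobileye's published parameters.
import math

PEDESTRIAN_HEIGHT_M = 1.7   # assumed typical pedestrian height
VERTICAL_FOV_DEG = 40.0     # assumed camera vertical field of view
MIN_PIXEL_HEIGHT = 40       # assumed pixels a classifier needs to fire

def max_detection_range(vertical_resolution_px: int) -> float:
    """Range at which a pedestrian still spans MIN_PIXEL_HEIGHT pixels."""
    # Pinhole focal length in pixels: f = (res / 2) / tan(fov / 2)
    focal_px = (vertical_resolution_px / 2) / math.tan(
        math.radians(VERTICAL_FOV_DEG / 2)
    )
    # Projected height at distance d is f * H / d; solve for d.
    return focal_px * PEDESTRIAN_HEIGHT_M / MIN_PIXEL_HEIGHT

for name, res in [("VGA", 480), ("720p", 720), ("1080p", 1080)]:
    print(f"{name} ({res} rows): ~{max_detection_range(res):.0f} m")
```

With these invented numbers the model lands near the quoted figures: roughly 30m at VGA and roughly 60m at 1080p-class resolution, with range growing linearly in vertical resolution.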
That’s nice, but probably not what multifoliaterose had in mind. The company has 200 employees and was founded in 1999. I’d be impressed if the SIAI managed to come up with anything nearly as sophisticated. I suppose such companies would call CEV better science fiction. Or do you think they would hire Yudkowsky, if he wanted to get hired? I’m not sure if he is skilled enough to work for such a company. Have you seen any proof of his math or programming skills that would allow you to conclude that he would be able to come up with such pedestrian detection software, let alone Friendliness? (ETA: I believe such questions are justified; after all, it is important to assess the ability of the SIAI.)
I am a programmer, and I, for one, do not see a very strong connection between the potential for building an AGI and programming ability. An AGI isn’t going to come about because you made a really sweet sorting algorithm; it’s going to come about because you had a key insight about what thought is (or something along those lines). 1337 programming skillz probably don’t help a lot with that.
Agreed. AGI requires Judea Pearl more than it requires John Carmack.

AGI requires John von Neumann or Alan Turing or the like. Any of them would have decent programming expertise today.
AGI requires something that would also result in familiarity with the tool-set of mankind, including the actual use of computers for reasoning, which requires being able to program. That programming expertise might be useful is enough for the upcoming maker of the AGI insight to become a good programmer.
I am a programmer, and I, for one, do not see a very strong connection between the potential for building an AGI and programming ability.
Do you think that intelligence is going to be quite simple with hindsight? Something like Einstein’s mass–energy equivalence formula, E = mc²? Because if it is ‘modular’, then I don’t see how programmers, or mathematicians, won’t be instrumental in making progress towards AGI. Take for example IBM Watson:
When a question is put to Watson, more than 100 algorithms analyze the question in different ways, and find many different plausible answers, all at the same time. Yet another set of algorithms ranks the answers and gives them a score. For each possible answer, Watson finds evidence that may support or refute that answer. So for each of hundreds of possible answers it finds hundreds of bits of evidence and then with hundreds of algorithms scores the degree to which the evidence supports the answer. The answer with the best evidence assessment will earn the most confidence. The highest-ranking answer becomes the answer. However, during a Jeopardy! game, if the highest-ranking possible answer isn’t rated high enough to give Watson enough confidence, Watson decides not to buzz in and risk losing money if it’s wrong. The Watson computer does all of this in about three seconds.
It needs a company like IBM to design such a narrow AI. More than 100 algorithms. Could it have been done without a lot of computational and intellectual resources? Can progress be made without tapping into the workings of the human brain, without designing specially optimized hardware, without programming and debugging? Does it really only need some smarts and contemplation to come up with a few key insights to get something that can take over the universe? I’d be interested to learn how one can arrive at that conclusion.
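For concreteness, here is a minimal sketch of the pipeline shape the quote above describes: several independent generators propose candidate answers, several scorers grade the evidence for each candidate, and the system only buzzes in when the combined confidence clears a threshold. This is my own toy illustration, not IBM’s design; every function name and number in it is an invented stand-in.

```python
# Toy sketch of a Watson-style answer pipeline: many generators propose
# candidates, many scorers grade the evidence, and the system answers only
# when confidence clears a threshold. All names and numbers are invented;
# the real system uses 100+ algorithms and heavyweight NLP machinery.
from statistics import mean

def generator_keyword(question):     # stand-in for one of many generators
    return ["Toronto", "Chicago"]

def generator_category(question):    # another stand-in generator
    return ["Chicago"]

def scorer_popularity(question, candidate):   # stand-in evidence scorer
    return 0.9 if candidate == "Chicago" else 0.4

def scorer_type_match(question, candidate):   # another stand-in scorer
    return 0.8 if candidate == "Chicago" else 0.3

GENERATORS = [generator_keyword, generator_category]
SCORERS = [scorer_popularity, scorer_type_match]
BUZZ_THRESHOLD = 0.7  # below this confidence, decline to answer

def answer(question):
    candidates = {c for g in GENERATORS for c in g(question)}
    # Combine per-scorer evidence into one confidence per candidate.
    ranked = sorted(
        ((mean(s(question, c) for s in SCORERS), c) for c in candidates),
        reverse=True,
    )
    confidence, best = ranked[0]
    return best if confidence >= BUZZ_THRESHOLD else None  # don't buzz in

print(answer("Its largest airport is named for a WWII hero"))  # -> Chicago
```

The point of the sketch is the architecture, not the components: the actual generators and scorers are where the hundred-plus algorithms live, but they plug into a pipeline of roughly this shape.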
I agree mathematicians are likely to be useful in making AGI. If the folks at SIAI were terrible at math, that would be a bad sign indeed.
I wouldn’t say ‘simple’, but I would be surprised if it were complex in the same way that Watson is complex. Watson is complex because statistical algorithms can be complex, and Watson has a lot of them. As far as I can tell, there’s nothing conceptually revolutionary about Watson; it’s just a neat and impressive statistical application. I don’t see a strong relationship between Watson-like narrow AI and the goal of AGI.
An AGI might have a lot of algorithms (because intelligence turns out to have a lot of separate components), but the difficulty will be understanding the nature of intelligence, coming up with algorithms, and proving the important properties of those algorithms. I wouldn’t expect “practical implementation” to be a separate step where you need programmers, because I would expect everything to be implemented in some kind of proof environment.
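As a trivial illustration of what ‘implemented in some kind of proof environment’ could look like, here is a toy Lean 4 snippet (my own example, assuming a toolchain where the `omega` tactic is available): the algorithm and a machine-checked proof of one of its properties live in the same artifact.

```lean
-- A toy "algorithm" together with a machine-checked property. The point is
-- only that code and proof live side by side; an AGI-relevant property
-- would of course be vastly harder to state and prove.
def double (n : Nat) : Nat := n + n

theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double   -- goal becomes: n + n = 2 * n
  omega           -- linear-arithmetic decision procedure closes it
```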
I too am rather sceptical about the Singularity Institute’s programming skills.
LessWrong itself seems like a nice hack—and there have been some bits and pieces in Java—but hardly enough to inspire confidence in their ability to write reliable and safe software.
Perhaps—if they get sufficient money from donations—they will be able to hire some expert programmers.
They do seem to be pissing off some of the experts in the artificial intelligence community a little, though—with their fear-mongering marketing strategy.

Will these groups kiss and make up—if and when the time finally comes to get down to the business of coding? Or will they remain antagonistic? Stay tuned—I figure.

Hiring expert programmers is trivially easy compared to making the conceptual breakthroughs required for AGI and Friendly AI.

Making a general-purpose intelligent agent is mostly a software engineering problem. It will probably take programmers to solve it—since they are the most experienced at solving such problems.

That doesn’t solve the problem of what to do with such an agent, of course. That is not so much a software engineering problem.

Not coded by the SIAI as far as I know.

Not that I think it matters much. If you want good programmers you hire them. That part is trivial.
As long as you are or have a skilled enough programmer to identify skilled programmers. My experience working as a programmer suggests that this skill is damn hard to cultivate without just being a good programmer yourself.
My impression is that Eliezer is sufficiently capable that he would be able to identify skilled programmers (I cannot guess about others in SIAI). More importantly, he is sufficiently well connected that he would be able to hire someone who is fairly well recognised as an expert programmer specifically to consult on the task of identifying and recruiting less well known but more available talent to work at SIAI.

Someone like, say, Paul Graham (apparently a skilled programmer, definitely skilled at identifying talent) could perhaps be persuaded to consult.
But the real point is that the task of getting skilled programmers pales in comparison to the task of finding people to do AI theory. That is really hard. Implementing the results of the theory is child’s play by comparison.
But the real point is that the task of getting skilled programmers pales in comparison to the task of finding people to do AI theory. That is really hard. Implementing the results of the theory is child’s play by comparison.
At the risk of repeating myself, I’m not someone who can really judge this. I have my doubts, however, that AI theory is a conceptual problem that can be solved just by thinking about it. My guess is that you’ll have to do hard science to make progress on the subject: do research on the human brain, run large-scale experiments on supercomputers, build or invent specially optimized hardware, code and debug, and so on. That it just needs some smarts to come up with a few key insights, I don’t see that. How do people arrive at that conclusion?

The task of getting skilled programmers is probably part of the solution, because resources, intellectual and otherwise, might be instrumental in making progress on the problem.
That it just needs some smarts to come up with a few key insights
It is amazing how much difference words like ‘just’ and ‘a few’ can make! This is an extremely hard problem. All sorts of other skills are required, but those skills are commodities. They already exist, people have them, you buy them.

What is required to solve something like AIs that are stable when upgrading is extremely intelligent individuals studying the best work in several related fields full-time for 15 years… and then having ‘a few insights’.
I think that XiXiDu’s point is that the theory and implementation cannot be cleanly divorced. You may need to be constantly programming and trying out the ideas your theory spits out in order to guide and shape the theory to its final, correct form. We can’t necessarily just wait until the theory is developed and then buy the available skill needed to implement it.
Note that the full sentence was:

That it just needs some smarts to come up with a few key insights, I don’t see that. How do people arrive at that conclusion?
Well, nobody really knows, one way or the other.
So far, machine intelligence has mostly been 60 years of sluggish, gradual progress. On the other hand, we have a pretty neat theory of forecasting—which is the guts of the problem. Maybe we have been doing it all wrong—and there’s a silver bullet that nobody has stumbled across yet.
Hiring good programmers without being a good programmer is far from trivial.

I think the SIAI folk have enough knowledge to deal with this, and there are enough good programmers associated with LW to help if needed. I know for a fact cousin_it built some cool software; I’m sure there are others.
I assume that as genetic and bio-engineering and non-organic augmentation come into play, recognizing humans is going to get harder.

I think there might be some slight difference in quality between the problem of observing a wide volume of physical reality by varying means and in varying circumstances and unerringly recognizing certain persistent physical processes as corresponding to agents with moral standing, such as living, conscious humans, in order to formulate open-ended plans to guarantee their ethically preferred mode of continued existence, and the problem of analyzing a video feed for patterns that roughly correspond to a 2D projection of an average-human-shaped object and making some pathfinding adjustments to avoid bumping into it.