In Video 12, Eliezer says that the SIAI is probably not going to be funding any ad hoc AI programs that may or may not produce lightning bolts of “AH-HA!” or eureka moments.
He also says that he believes any recursive self-improving AI must be created to very high standards of precision (so that we don’t die in the process)...
Given these two things, what exactly is the SIAI going to be funding?
These projects, for example...
Hmm… that list of projects worries me a little...
It uncomfortably reminds me of preachers on TV/radio who spend all their air time trying to convert new people as opposed to answering the question “OK, I’m a Christian, now what should I do?” The fact that they don’t address any follow-up questions really hurts their credibility.
Many of these projects seem to address peripheral/marketing issues rather than the central, nitty-gritty technical details required for developing GAI. That worries me a bit.
Working on papers for submission to peer-reviewed scientific journals is research, not marketing.
If SIAI wants to build credibility, it needs publications in scientific journals. Doing so could help secure further funding and the development of actual implementations.
I think it is a very good idea to first formulate and publish the theoretical basis for the work they intend to do, rather than just saying, “we need money to develop component X of our Friendly AI.”
Of course, one possible outcome is that the scientific community will deem the research shallow, unoriginal, or unrealistic to implement. However, the ideas have to be published before they can be reviewed.
So my take on this is that SIAI is merely asking for a chance to demonstrate its skills rather than for blind commitment.
I expect that developing AI to the desired standards is not currently a project that can be moved forward by throwing money at it (at least not money at the scale SIAI has to work with).
I can’t speak for SIAI, but were I personally tasked with “arrange the creation of an AI that will start a positive singularity,” my strategy for the next several years at least would center on publicity and recruiting.
I do not think I am as pessimistic as drcode about the work that I see the SIAI doing. At first it did strike me as similar to the televangelist, but then I began thinking that all of the work on the SIAI projects list could very well influence the people who will be doing the hard work of putting code to machine (hopefully, as I will be doing eventually).
I think it was Soulless Automaton below who suggested that the SIAI is probably not yet at the point where it can make grants for the actual work of creating AGI/FAI.