Note that FAI research, AI research, and AGI research are three very different things.
Currently, FAI research is conducted solely by the Singularity Institute and researchers associated with it. Looking at SI’s publications over the last few years, the FAI research program has more in common with philosophy and machine ethics than with programming.
More traditional AI research, which is largely conducted at universities, consists mostly of programming and math aimed at solving specific problems. For the most part, AI researchers aren’t trying to build general intelligence of the kind discussed on LW, and a lot of AI work is split into sub-fields like machine learning, planning, natural language processing, etc. (I’m an undergraduate intern for an AI research project at a US university. Feel free to PM me with questions.)
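(For a concrete sense of what this kind of narrow, problem-specific work looks like, here is a minimal sketch of a standard supervised-learning task in Python with scikit-learn. The dataset and model are purely illustrative, not taken from any particular research project.)

```python
# Minimal sketch of a "narrow" AI task: supervised classification.
# Dataset and model are illustrative; any standard ML problem would do.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small handwritten-digit dataset: one specific, well-defined problem.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple classifier; no "general intelligence" involved.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Success is measured on this one task only, via held-out accuracy.
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```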
AGI research mostly consists of projects like OpenCog or Numenta, i.e., near-term attempts to create general intelligence.
It’s also worth remembering that AGI research isn’t taken very seriously by some (most?) AI researchers, and the notion of FAI isn’t even on their radar.
This is useful, and suggests “learn programming” is good preparation both for working on AI and for simply converting math ability into $.
One thing it seems to leave out, though, is the stuff Nick Bostrom has done on AI, which isn’t strictly about FAI but is related. Perhaps we need to add a category of “general strategic thinking on how to navigate the coming of AI.”
I should learn more about AGI projects. My initial guess is that near-term projects are hopeless, but in their “Intelligence Explosion” paper Luke and Anna express the view that a couple of AGI projects have a chance of succeeding relatively soon. I should know more about that. Where should I begin learning about AGI?
As I understand it, “learn programming” is essential for working in pretty much every sub-field of computer science.
The stuff Nick Bostrom has done is generally put into the category of machine ethics. (Note that machine ethics is seen as a sub-field of philosophy rather than of computer science.)
I don’t know much about the AGI community beyond a few hours of Googling. My understanding is that there are a handful of AGI journals, the largest and most mainstream being this one. Then there are independent AGI projects, which are sometimes looked down upon for being hopelessly optimistic. I don’t know where you should begin learning about the field—I suppose Wikipedia is as good a place as any.
That’s the only one.

Thanks for the correction! I have no idea why I thought there were more.
Note that FAI research, AI research, and AGI research are three very different things.
Perhaps FAI differs, but AI and AGI really just describe differences in project scale, scope, and ambition within the same field.
It’s also worth remembering that AGI research isn’t taken very seriously by some (most?) AI researchers,
Most of the high-profile AI researchers seem to point to AGI as the long-term goal, and this appears just as true today as it was in the early days. There may be a trend to downplay the AGI-Singularity associations.
Perhaps FAI differs, but AI and AGI really just describe differences in project scale, scope, and ambition within the same field.
My understanding is that AI and AGI differ in terms of who is currently carrying out the research. The term “AI research” (when contrasted with “AGI research”) usually refers to narrow-AI research projects carried out by computer scientists. “AGI research” has become more associated with independent projects and the AGI community, which has differentiated itself somewhat from the AI/CS community at large. Still, there are many prominent AI researchers and computer scientists who are involved with AGI, e.g. these people. “FAI research” is something completely different—it’s not a recognized field (the term only has meaning to people who have heard of the Singularity Institute), and currently it consists of philosophy-of-AI papers.
Most of the high-profile AI researchers seem to point to AGI as the long-term goal, and this appears just as true today as it was in the early days.
Yes, but they are much more pessimistic about when such intelligences will be developed than they were, say, 30 years ago, and academics no longer start AI research projects by announcing that they will create general intelligence within a short period of time. Independent AGI projects of the sort we see today do make those kinds of grandiose announcements, which is why they attract scorn from academics.
There may be a trend to downplay the AGI-Singularity associations.
Definitely.
I’m an undergraduate intern for an AI research project at a US university.
Off-topic, potentially interesting to other undergrads: Do you have information/experience on the competitiveness* of summer research programs (particularly REUs or other formally organized programs) in AI (or CS in general), relative to programs in physics/astronomy and math (or other disciplines)?
(I’ve found that recent math REUs seem to be substantially more competitive than similar physics/astronomy programs. Obviously, this is a broad observation that should not be applied to individual programs.)
*Competitiveness as in difficulty of being accepted by a program.
Sorry, no idea. What I’m doing isn’t really a formally organized summer research program; I’m just an undergraduate assistant.