What do you feel are the most pressing unsolved problems in AGI?
In AGI? If you mean “what problems in AI do we need to solve before we can get to the human level”, then I would say:
Ability to solve currently intractable statistical inference problems (probably not just by scaling up computational resources, since many of these problems have exponentially large search spaces; see the first sketch after this list).
Ways to cope with domain adaptation and model mis-specification (see the second sketch below).
Robust and modular statistical procedures that can be fruitfully fit together.
Large amounts of data, in formats helpful for learning (potentially including provisions for high-throughput interaction, perhaps with a virtual environment).
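As a minimal sketch of the first point, consider exact inference in a toy model of n binary variables; the model, coupling values, and names below are illustrative assumptions rather than anything specific from this answer. Brute-force computation of the normalizing constant sums over 2**n configurations, so each extra variable doubles the cost, and a million-fold increase in compute buys only about 20 more variables.

```python
import itertools
import numpy as np

# Hypothetical toy model: n binary (+/-1) variables with random pairwise couplings.
# Exact computation of the normalizing constant (partition function) requires
# summing over all 2**n joint configurations.
def partition_function(J):
    n = J.shape[0]
    total = 0.0
    for config in itertools.product([-1, 1], repeat=n):
        s = np.array(config)
        total += np.exp(0.5 * s @ J @ s)
    return total

rng = np.random.default_rng(0)
n = 15                                  # 2**15 = 32768 terms: still feasible
J = rng.normal(0.0, 0.1, size=(n, n))
J = (J + J.T) / 2                       # symmetric couplings
print(partition_function(J))

# At n = 300 the same brute force would need 2**300 (about 2e90) terms, so a
# million-fold increase in compute (2**20 is about 1e6) only buys roughly 20
# more variables -- the sense in which scaling alone doesn't solve the problem.
```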
To some extent this reflects my own biases, and I don’t mean to say “if we solve these problems then we’ll basically have AI”, but I do think it will either get us much closer or else expose new challenges that are not currently apparent.
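Similarly, here is a loose sketch of what the domain adaptation / mis-specification item can look like in miniature, again with assumed, illustrative numbers and names: a linear model fit to quadratic data does fine on its training region but degrades sharply under a modest domain shift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: the true relationship is quadratic, but we fit a deliberately
# mis-specified linear model on data drawn from one narrow region.
def true_fn(x):
    return x ** 2

x_train = rng.uniform(0.0, 1.0, size=200)                    # "source" domain
y_train = true_fn(x_train) + rng.normal(0.0, 0.05, size=200)

# Ordinary least squares fit of y ~ a*x + b.
A = np.stack([x_train, np.ones_like(x_train)], axis=1)
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def predict(x):
    return coef[0] * x + coef[1]

# In-domain error is small; on a shifted domain the same model fails badly.
x_shift = rng.uniform(2.0, 3.0, size=200)                    # "target" domain
in_mse = np.mean((predict(x_train) - true_fn(x_train)) ** 2)
out_mse = np.mean((predict(x_shift) - true_fn(x_shift)) ** 2)
print(f"in-domain MSE: {in_mse:.4f}   shifted-domain MSE: {out_mse:.4f}")
```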
Do you believe AGI can “FOOM” (you may have to qualify what you interpret FOOM as)?
I think it is possible that a human-level AI would very quickly acquire a lot of resources and power. I am more skeptical that an AI would become qualitatively more intelligent than a human, but even an AI no more intelligent than a human, if it could easily copy and transmit itself, would already be powerful enough to be a serious threat (note that it is also quite possible that it would have many more cycles of computation per second than a biological brain).
In general I think this is one of many possible scenarios; for example, it is also possible that sub-human AI would already control much of the world’s resources and that we would have put systems in place to deal with this fact. So I think it can be useful to imagine such a scenario, but I wouldn’t stake my decisions on the assumption that something like it will occur. I think this report does a decent job of elucidating the role of such narratives (not necessarily AI-related) in making projections about the future.
How viable is the scenario of someone creating an AGI in their basement, thereby changing the course of history in unpredictable ways?
Not viable.