A little off-topic—what’s the point of whole-brain emulation?
As with almost any such question, meaning is not inherent in the thing itself, but is given by various people, with no guarantee that anyone will agree.
In other words, it depends on who you ask. :)
For at least some people, those who subscribe to the information-pattern theory of identity, a whole brain emulation based on their own brain is at least as good a continuation of their self as the original brain would have been, and existing in the form of software has certain advantages, such as being able to keep multiple off-site backups. Others, focused on the risks of Unfriendly AI, may deem WBEs to be the closest we’ll be able to get to a Friendly AI before an Unfriendly one starts making paperclips. Others may just want the technology available for solving certain scientific mysteries. There are plenty more such reasons.
You’d have to ask someone else; I consider it a waste of time. De novo AGI will arrive far, far before we come anywhere close to achieving real-time whole-brain emulation.
And I don’t subscribe to the information-pattern theory of identity, for what seem to me obvious experimental reasons, so I don’t see that as a viable route to personal longevity.
What’s the best current knowledge for estimating the effort needed for de novo AGI? Given the unknown unknowns, and that we still don’t seem to have any real idea of how everything is supposed to fit together, blanket statements like this worry me. We do have a roadmap for whole-brain emulation, but I haven’t seen anything like that for de novo AGI.
And that’s the problem I have. WBE looks like something that will probably take decades, but we know that the specific solution exists, and neuroscience gives us a lot of information about its general properties.
With de novo AGI, beyond knowing that the WBE solution exists, what do we know about solutions we could come up with on our own? It seems to me that this could be solved in 10 years or in 100 years, and you can’t really make an informed judgment that the 10-year timeframe is much more probable.
But if you want to discount the WBE approach as not worth the time, you’d pretty much have to claim reason to believe that a 10-20 year timeframe for de novo AGI is exceedingly probable. Beyond that, you’re up against 50-year projects of focused study on WBE with present-day and future computing power, and that sort of thing does look like something you should assign a significant probability of producing results.
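To make that comparison concrete, here is a minimal sketch with made-up numbers: suppose the de novo AGI timeline is highly uncertain (say, spread log-uniformly from 10 to 100 years, so 10 years is possible but not favored) while a focused WBE program takes roughly 50 years, give or take a decade. Both distributions are purely illustrative assumptions; the sketch only shows how the two routes trade off under whatever numbers you actually believe.

```python
import math
import random

# Illustrative assumption: de novo AGI arrival time is log-uniform on [10, 100] years,
# i.e. "could be 10 years or could be 100, with no strong reason to favor the short end".
def sample_agi_years():
    return math.exp(random.uniform(math.log(10), math.log(100)))

# Illustrative assumption: a focused WBE project takes about 50 years, give or take ~10.
def sample_wbe_years():
    return random.gauss(50, 10)

trials = 100_000
agi_first = sum(sample_agi_years() < sample_wbe_years() for _ in range(trials))
print(f"De novo AGI finishes before WBE in about {agi_first / trials:.0%} of trials")
```

The interesting output is not the particular percentage but how sharply it moves as you shift probability mass in the AGI distribution toward or away from the 10-20 year range, which is exactly the judgment the two positions above disagree about.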
The thing is, artificial general intelligence is a fairly dead field, even by the standards of AI. There has been a lack of progress, but that is perhaps due more to a lack of activity than to any inherent difficulty of the problem (although it is a difficult problem). So estimating the effort needed for de novo AGI, on the presumption of adequate funding, cannot be done by fitting curves to past performance. The outside view fails us here, and we need to take the inside view and look at the details.
De novo AGI is not as tightly constrained a problem as whole-brain emulation. For whole-brain emulation, the only seriously considered approach is to scan the brain in sufficient detail and then perform a sufficiently accurate simulation. There’s a lot of room to quibble about what “sufficient” means in those contexts, about destructive vs. non-destructive scanning, and about other details, but there is a certain amount of unity around the overall idea. You can define the end-state goal in the form of a roadmap and measure your progress towards it, because the entire field is aligned on that roadmap.
Such a roadmap does not and really cannot exist for AGI (although there have been attempts). The problem is the nature of “de novo AGI”: “de novo” means new, without reference to existing intelligences, and once you open up the problem space like that there are an indefinite number of possible solutions with different tradeoffs, and people value those tradeoffs differently. So the field is fractured, and it’s really hard to get everybody to agree on a single roadmap.
Pat Langley thinks that good old-fashioned AI has the solution, and that we just need to learn how to constrain inference. Pei Wang thinks that new probabilistic reasoning systems are what is required. Paul Rosenbloom thinks that representation is what matters, and that the core of AGI is a framework for reasoning about graphical models. Jeff Hawkins thinks that a hierarchical network of deep learning agents is all that’s required, and that it’s mostly a scaling and data-structuring problem. Ray Kurzweil has similar biologically inspired ideas. Ben Goertzel thinks they’re all correct, and that the key is a common shared framework in which moderately intelligent implementations of all of these ideas can collaborate, with human-level intelligence emerging from the union.
Goertzel has an approachable collection of essays on the subject, based on a talk he gave, sadly, almost 10 years ago, titled “10 years to the singularity if we really, really try” (spoiler: over the last 10 years we didn’t really try). It is available as a free PDF here. He also has an actual technical roadmap to achieving AGI, published as a two-volume book and linked to on LW here. I admit to being much more partial to Goertzel’s approach. And while 10 years seems optimistic for anything short of Apollo Program / Manhattan Project levels of funding, it could be doable under that model. And there are shortcut paths for the less safety-inclined.
Without a common roadmap for AGI, it is difficult to get an outsider to agree that AGI could be achieved in a particular timeframe with a particular resource allocation. And it seems practically impossible to get the entire AGI community to agree on a single roadmap, given the diversity of opinions over which approaches we should take and the lack of centralized funding. But the best I can fall back on is this: if you ask any competent person in this space how quickly a sufficiently advanced AGI could be obtained if sufficient resources were instantly allocated to their favored approach, the answer you’d get would be in the range of 5 to 15 years. “10 years to the singularity if we really, really try” is not a bad summary. We may disagree greatly on the details, and that disunity is holding us back, but the outcome seems reasonable if the coordination and funding problems were solved.
And yes, ~10 years is far less time than the WBE roadmap predicts, so there’s no question as to where I hang my hat in that debate. AGI is a leapfrog technology that has the potential to bring about a singularity event much earlier than any emulative route. Although, in all honesty, my day job is currently unrelated (bitcoin), so I can’t profess to be part of the solution yet.