Assuming you had access to your own source code, you would almost certainly have to dramatically alter your personality to cope with subjective centuries of no meaningful social interaction.
It doesn’t have to be subjective centuries. There are many solitary, ascetic humans who have lived on their own for months or years.
Also, if you could make one such brain, you could then make two or three, and then they would each have some company even when running at full speed.
we’re much further from an understanding of the brain adequate to simulate it than we are from developing this technology
Most people have no idea how the brain works, but some have much better ideas than others.
Computational neuroscience is progressing quickly. We do have a good idea of the shape of the computations the cortex performs (spatio-temporal hierarchical Bayesian inference), and we can already recreate some of that circuit functionality in simulations today (I’m thinking largely of Poggio’s work at MIT).
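To make “recreate some of that circuit functionality” concrete, here is a minimal sketch of the alternating template-matching and max-pooling motif used in HMAX-style models of visual cortex. It is illustrative only: the template sizes, pooling widths, and random inputs are my placeholder assumptions, not parameters from Poggio’s published models.

```python
# Minimal HMAX-style motif: an S layer (template matching with Gaussian
# tuning) followed by a C layer (max pooling over position). Illustrative
# sketch only; all sizes and inputs are placeholder assumptions.
import numpy as np

def s_layer(x, templates, sigma=1.0):
    """Respond to how well each local patch of x matches each stored template."""
    n, k = x.shape[0], templates.shape[1]
    out = np.empty((n - k + 1, templates.shape[0]))
    for i in range(n - k + 1):
        patch = x[i:i + k]
        # Gaussian tuning: response falls off with patch-template distance.
        out[i] = np.exp(-np.sum((templates - patch) ** 2, axis=1) / (2 * sigma ** 2))
    return out

def c_layer(s, pool=2):
    """Max-pool over position, giving invariance to local translation."""
    return np.array([s[i:i + pool].max(axis=0)
                     for i in range(0, len(s) - pool + 1, pool)])

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)          # stand-in for one row of input
templates = rng.standard_normal((8, 5))   # 8 random 5-sample templates
features = c_layer(s_layer(signal, templates))
print(features.shape)                     # pooled, template-tuned features
```

Stacking several such S/C pairs, with each stage’s templates learned from the previous stage’s output, is the basic shape of these cortical hierarchy models.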
Did you check the link I posted? We may be able to recreate some of the circuit functionality of the brain, but that doesn’t mean we’re anywhere close to understanding the brain well enough to create a working model. We don’t even know how much we don’t know.
There are few ascetic humans who have lived without human contact for as much as a decade, which would pass in less than six minutes of objective time.
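To put a number on that: a subjective decade compressing into under six objective minutes implies a speedup of roughly

$$
\frac{10 \times 525{,}960\ \text{min}}{6\ \text{min}} \approx 8.8 \times 10^{5},
$$

using 365.25 × 24 × 60 = 525,960 minutes per year; i.e. close to a million-fold (this derivation is mine, worked back from the figures above, not a number stated in the thread).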
If you make enough such brains that they can reasonably keep each other company, and give them human-like psychology, they’re unlikely to care much about or relate to humans, who by comparison live so slowly that they’re almost impossible to communicate with meaningfully.
In the future, we probably will create AI of some description which thinks dramatically faster than humans do, and we may also upload our minds, possibly with some revision, to much faster analogues once we’ve made arrangements for a society that can function at that pace. But creating the first such AIs by modeling human brains is simply not a good or credible idea.
Yes, it’s an unrecommended review of a book. Do glial cells have an important role in the brain? Yes. Do they significantly increase the computational cost of functionally equivalent circuits? Absolutely not.
The brain has to handle much more complexity than an AGI brain would: the organic brain has to self-assemble out of cells, and it has to provide all of its own chemical batteries to run the ion pumps. An AGI brain can use an external power supply, so it just needs to focus on the computational aspects.
We may be able to recreate some of the circuit functionality of the brain, but that doesn’t mean we’re anywhere close to understanding the brain well enough to create a working model
The most important part of the brain is the cortex. It is built out of a simpler circuit, repeated many times, that computational neuroscientists have studied extensively and actually understand fairly well, well enough to start implementing.
Do we understand everything that circuit does in every brain region all the time? Probably not.
Most of the remaining missing knowledge is about the higher-level connection architecture between regions and the interactions with the thalamus, hippocampus, and cerebellum.
We don’t necessarily need to understand all of this to build an AGI with a cortex that thinks somewhat like us. We also have general AI theory to guide us.
But creating the first such AIs by modeling human brains is simply not a good or credible idea.
Whether or not it is a good idea is one question, but it absolutely is a credible idea. In fact, it is the most credible idea for building AGI, but the analysis for that is longer and more complex. I’ve written some about that on my site, and I’m going to write up an intro summary of the state of brain-AGI research and why it’s the most promising path.
It’s unrecommended because it’s badly written, not because it doesn’t have worthwhile content. Glial cells serve a purpose such that the brain will not produce identical output if you exclude them from the model, and we still don’t have a good understanding of how the interaction works; until recently, we hadn’t even paid much attention to studying it.
Most of the remaining missing knowledge is about the higher-level connection architecture between regions and the interactions with the thalamus, hippocampus, and cerebellum.
We don’t necessarily need to understand all of this to build an AGI with a cortex that thinks somewhat like us. We also have general AI theory to guide us.
General AI theory that has so far failed to produce anything close to a general AI.
Whether or not it is a good idea is one question, but it absolutely is a credible idea. In fact, it is the most credible idea for building AGI, but the analysis for that is longer and more complex. I’ve written some about that on my site, and I’m going to write up an intro summary of the state of brain-AGI research and why it’s the most promising path.
You’ve already posted arguments to that effect on this site; note that they have tended to be disputed and downvoted.
We don’t necessarily need to understand all of this to build an AGI with a cortex that thinks somewhat like us. We also have general AI theory to guide us.
General AI theory that has so far failed to produce anything close to a general AI.
We don’t yet have economical computer systems with ~10^14 units of memory capacity and the ability to perform 100-1000 memory operations per second over all of that memory. The world’s largest GPU supercomputers are getting there, but doing it the naive way might take thousands of GPUs, and even then the interconnect is expensive.
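A rough back-of-envelope makes the scale concrete; the per-GPU numbers below are my assumptions for hardware of roughly this era, not figures from the comment.

```python
# Back-of-envelope: how many GPUs would the naive approach need?
# All hardware constants are rough assumptions, not measurements.
SYNAPSES = 1e14             # ~10^14 weights to store
OPS_PER_WEIGHT_S = 100      # low end of the 100-1000 memory-ops/s figure
BYTES_PER_WEIGHT = 1        # assume weights compressed to ~1 byte each

GPU_MEM_BYTES = 4e9         # ~4 GB of RAM per GPU (assumption)
GPU_BANDWIDTH = 1.5e11      # ~150 GB/s memory bandwidth per GPU (assumption)

for_capacity = SYNAPSES * BYTES_PER_WEIGHT / GPU_MEM_BYTES
for_bandwidth = SYNAPSES * BYTES_PER_WEIGHT * OPS_PER_WEIGHT_S / GPU_BANDWIDTH

print(f"GPUs for storage alone:   {for_capacity:,.0f}")    # ~25,000
print(f"GPUs for bandwidth alone: {for_bandwidth:,.0f}")   # ~67,000
```

Depending on how much compression and locality buy you, the naive figure lands anywhere from thousands to tens of thousands of GPUs, and either way the cross-GPU interconnect dominates the cost.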
We understood the feasibility and general design space of nuclear weapons and space travel long before we had the detailed knowledge and industrial capacity to build such technologies.
We understood the feasibility and general design space of nuclear weapons and space travel long before we had the detailed knowledge and industrial capacity to build such technologies.
11 years (Szilard’s patent in 1934 to Trinity in 1945) is ‘long before’?
11 years (Szilard’s patent in 1934 to Trinity in 1945) is ‘long before’?
OK, so space travel may be a better example, depending on how far back we trace the idea’s origins. But I do think that we could develop AGI in around a decade if we made an Apollo project out of it (Apollo was a 14-year program costing around $170 billion in 2005 dollars).
Perhaps, but as Eliezer has gone to some lengths to point out, the great majority of those working on AGI simply have no concept of how difficult the problem is, of the magnitude of the gulf between their knowledge and what they’d need to solve the problem. And solving some aspects of the problem without solving others can be extraordinarily dangerous. I think you’re handwaving away issues that are dramatically more problematic than you give them credit for.
Perhaps, but as Eliezer has gone to some lengths to point out, the great majority of those working on AGI simply have no concept of how difficult the problem is, of the magnitude of the gulf between their knowledge and what they’d need to solve the problem.
There is an observational bias involved here. If you look at the problem of AGI and come to understand it, you realize just how difficult it is, and you are likely to move to work on a less ambitious narrow-AI precursor. This leaves a much smaller remainder trying to work on AGI, including the contingent that doesn’t understand the difficulty.
I think you’re handwaving away issues that are dramatically more problematic than you give them credit for.
If you are talking about the technical issues, I think $1-100 billion and 5-20 years is a good estimate.
As for the danger issues, yes, of course this will be the most powerful, and thus most dangerous, invention we ever make. The last, really.