It’s actually worse than that. Humans do not scale well to more computing power. A good AI could, in principle, expand the depth of its search trees logarithmically with compute (possibly somewhat better with Monte Carlo approaches). If you throw ten times more processing power at an AI, it could, at a bare minimum, extend the depth or detail of its planning. The same is not true of human neurology. All an em can do with more processing power is run faster, which has limited value: a human can do things a chimp just can’t, even if the chimp has a really long time to think about it. The human brain was not designed to scale with processing power, to run on a serial computer, or to be modular and improvable. De novo AI is just (probably) going to run circles around us.
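The logarithmic-scaling claim can be made concrete with a toy calculation. This is a sketch under the assumption of exhaustive tree search with a fixed branching factor; the factor of 30 below is an arbitrary, chess-like choice, not anything from the original comment:

```python
import math

def reachable_depth(node_budget, branching_factor=30):
    """Depth an exhaustive search can reach: b^d ≈ budget, so d = log_b(budget)."""
    return math.log(node_budget, branching_factor)

# Ten times the compute buys only a constant number of extra plies:
d1 = reachable_depth(1e9)
d2 = reachable_depth(1e10)
extra_plies = d2 - d1  # = log_30(10), no matter the starting budget
```

The gain is additive and independent of the starting budget, which is what "logarithmic with compute" means in practice; the point of the comment is that an em, running a fixed human architecture faster, does not even get that.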
A good upload could increase its short-term working memory capacity for distinct objects to match more complex patterns.
Okay, sure, but here’s the hitch:
Even if you gave me a whole bunch of nanobots that could rewire my brain any way I wanted, I would have no clue how to do that. I’m not sure the modern establishment of neurology has any good idea of how you’d do that. I know for sure that nobody on Earth knows how to do it in a way that is guaranteed not to cause psychosis, seizures, or other glitches down the line. It’s going to take serious, in-depth, and expensive research to figure out how to make these changes in a sane way.
Everything you said is true.
Also, it may even be that you cannot rewire your existing brain while keeping all of its current functionality without increasing its size.
But I look at the evidence about learning (including learning to see via photoelements stimulating non-visual neurons, and learning new motor skills). Also, it looks like selection for brain size proceeded quite efficiently during human evolution, and we just want to shift the equilibrium. I do think that building an upload at all would require a good enough understanding of cortical structure that you could increase the neuron count and then learn to use the improved brain with normal learning methods.
I think you’re being unreasonably skeptical here. We know the human biological brain, as hugely limited by biology and evolution as it is, can grow new neurons, and can expand some regions and shrink others; and this is true even in adults (past the extreme learning of childhood). Artificial neural networks can be created with widely varying numbers of neurons, limited mostly by computing power. Why would you assume that an em would simply be a static human brain, but faster? What stops it from regularly firing up new neurons, growing new connections, and expanding into a ‘bigger’ brain than biology could ever support, or than would even be useful given biological signalling limits?
Growing new neurons at extremely accelerated rates IS a process known to happen in adults: we normally call it brain cancer.
That’s obviously a little spurious, but it is a good indication that making the brain more intelligent is not trivial. I don’t doubt that it is possible to bootstrap an em up to higher intelligence, but figuring out how to do that while preserving personal identity and not causing insanity, seizures, neurogenesis-related noise, or other undesirable effects is probably going to take a long time. I think Eliezer was on the right track in describing em bootstrapping as ‘a desperate race between how smart you are and how crazy you are.’ The human brain evolved to work under a fairly narrow design spec. When you change any part of it in a dramatic fashion, all the normal regulatory mechanisms are no longer guaranteed or even likely to work.
De novo AI, by virtue of an (almost certainly) simpler underlying algorithm, has none of these issues. Expanding to use new computational resources is likely to be a matter of tweaking parameters in a mathematical function that could fit on a T-shirt if you printed small enough. They’ll always have a huge advantage, in that they were designed for this, and we definitely were not.
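By contrast, for a mind built as an artificial network, ‘growing new neurons’ can be function-preserving and nearly free. Here is a minimal sketch in the spirit of function-preserving network widening; the toy one-hidden-layer setup is an illustration, not anyone’s actual architecture. New hidden units with zero outgoing weights leave the computed function unchanged, so capacity can be added first and trained into afterwards:

```python
def forward(x, w_in, w_out):
    """One ReLU hidden layer: w_in is [hidden][inputs], w_out is [outputs][hidden]."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w_in]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_out]

def widen(w_in, w_out, new_units):
    """Add hidden units without changing the layer's output.

    New units get zero incoming and zero outgoing weights; the zero
    outgoing weights alone guarantee the function is preserved."""
    n_inputs = len(w_in[0])
    for _ in range(new_units):
        w_in.append([0.0] * n_inputs)
    for row in w_out:
        row.extend([0.0] * new_units)

x = [1.0, 2.0]
w_in = [[0.5, 0.25], [-1.0, 0.75]]
w_out = [[1.0, 2.0]]
before = forward(x, w_in, w_out)
widen(w_in, w_out, 3)           # 2 hidden units -> 5, one call
after = forward(x, w_in, w_out)
assert before == after          # same function, more capacity to train into
```

The biological analogue of this operation — adding neurons while provably preserving everything the brain already computes — is exactly the hard problem the comments above describe.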
No, it is trivial, we do it all the time as I already said: it’s called ‘learning’. With much learning, brain regions change size; what do you think is going on there?
If you want to bootstrap as fast as possible, sure.
Oh, definitely, the brain is capable of neurogenesis (to a degree that is a function of age) -- but you’ll notice that learning new things does not cause the brain to increase dramatically in intelligence. There are a number of core brain regions that seem pretty thoroughly hardwired. And, again, if you want to tweak things outside of normal ranges, you’re definitely voiding the warranty. The whole thing might, and likely will, break for no obvious reason unless you do it exactly right. That takes a lot of time, and is not guaranteed to be efficient.
If we’re in an intellectual arms race against de novo uFAI, I’d say yes, we do. And we’re probably going to lose.