What’s preventing MIRI from making massive investments into human intelligence augmentation? If I recall correctly, MIRI is most constrained on research ideas, but human intelligence augmentation is a huge research idea that other grantmakers, for whatever reason, aren’t funding. There are plenty of shovel-ready proposals already, e.g. https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing; why doesn’t MIRI fund them?

What’s preventing them from making massive investments into WBE/uploads? Many AI/tech leaders who think the MIRI view is wrong would also support that.
How much would it cost and how useful would an upload be?
What you are saying is “copy the spiking neural network architectures from a sufficient number of deceased high-intelligence individuals”, then “optimize the spiking neural network design to its local minimum in a training process”, then have some kind of “committee” of uploaded beings and error-checking steps in a pipeline, so that no single uploaded individual can turn the planet into a dictatorship.
And once you really look at what kind of pipeline you would need to control these ASIs derived from deceased humans, you realize: why did you need to start with humans at all?
Why not pick any neural network type that works, found by starting with the simplest network possible (see perceptrons and MLPs) and adding complexity until it works, and then pick the simplest cognitive architecture that works, instead of the mess of interconnected systems the brain uses? Fundamentally, why is “spaghetti” more alignable than “network A generates a candidate output, network B checks for hostile language, network C checks for sabotaged code, network D checks for....”? It is then crucial to ensure A can’t coordinate with (B, C, D...) to betray and leak unaligned outputs, which means you need very strong isolation: A cannot communicate with the “checker” networks or manipulate their weights. The human brain is a mess of interconnects and indirect signaling; it is exactly the wrong architecture for generating clean, likely-to-be-aligned outputs. See motivated cognition, where a human does something irrational despite knowing the risks and probable outcome.
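Here is a minimal sketch of that generator/checker structure, in Python. Every name and filter is an illustrative stub, not a real system; the point is only the isolation property: the checkers see nothing but the candidate output, and the generator gets nothing back beyond accept or reject.

```python
# Toy sketch of the "A generates, B/C/D check" pipeline. All names and
# filters are illustrative stubs; the point is the isolation structure.
from typing import Callable, Optional

def generate_candidate(prompt: str) -> str:
    """Network A: produces a candidate output (stub)."""
    return f"response to: {prompt}"

def check_hostile_language(output: str) -> bool:
    """Network B: flags hostile language (stub filter)."""
    return "attack" not in output.lower()

def check_sabotaged_code(output: str) -> bool:
    """Network C: flags sabotaged code (stub filter)."""
    return "rm -rf" not in output

CHECKERS: list[Callable[[str], bool]] = [
    check_hostile_language,
    check_sabotaged_code,
]

def run_pipeline(prompt: str) -> Optional[str]:
    candidate = generate_candidate(prompt)
    # Isolation: checkers see only the candidate text, never A's weights
    # or internal state, and A receives no signal beyond accept/reject.
    if all(check(candidate) for check in CHECKERS):
        return candidate
    return None  # rejected; nothing leaks back to A

print(run_pipeline("summarize the report"))
```

The design choice being argued for is that each checker is a separate network with no write access to A, so A has no channel through which to negotiate with or manipulate its checkers.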
Conventional AI research also has the practical advantage that it is much cheaper and faster to show results, as it already has. Uploads require emulating most of the brain, and a body as well.
And conventional AI will likely always be faster and more efficient. Compare a jet engine to a flapping bird...
Or: “what is the probability that nature found the most efficient possible neural network architecture during evolution?”
It’s that I, and many others, would identify with a WBE, and with a group of WBEs, much more than with purer AI. If the WBE behaves like a human, then it is aligned by definition, to me.
If we believe AI is extreme power, then we already have too much power; it’s all about making something we identify with.
I understand that. But consider the inaccuracies in emulation, the effectively thousands (or millions) of years of lived experience a WBE will accumulate, and the neural patches and enhancements added to improve performance.
You have built an ASI; you have just narrowed your architecture search from “any possible network the underlying compute can efficiently host” to a fairly narrow space of spaghetti messes of spiking neural networks, ones that also have forms of side-channel communication through various emulated glands and a “global” model of CSF and blood chemistry.
So it’s an underperforming ASI, but still hazardous.
Human intelligence augmentation is feasible over a scale of decades to generations, given iterated polygenic embryo selection.
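A back-of-the-envelope simulation of why this is a decades-to-generations timescale; every parameter below is an assumed placeholder, not an empirical estimate.

```python
# Toy simulation of iterated polygenic embryo selection. All parameters
# are assumed placeholders, not empirical estimates.
import random

IQ_SD = 15.0
SCORE_VAR_FRAC = 0.2   # assumed: fraction of IQ variance the score captures
N_EMBRYOS = 10         # embryos screened per generation
GENERATIONS = 5        # at ~25-30 years each, this spans over a century

score_sd = IQ_SD * SCORE_VAR_FRAC ** 0.5
mean_gain = 0.0
random.seed(0)
for gen in range(1, GENERATIONS + 1):
    # Each embryo: parental mean plus within-family variation in the score
    # (roughly score_sd / sqrt(2)); select the highest-scoring embryo.
    # Simplification: assumes selected individuals mate with each other.
    embryos = [random.gauss(mean_gain, score_sd / 2 ** 0.5)
               for _ in range(N_EMBRYOS)]
    mean_gain = max(embryos)
    print(f"generation {gen}: cumulative predicted gain ~{mean_gain:.1f} IQ points")
```

With these made-up numbers, the gain per generation is a handful of points, which is why the payoff arrives on a generational timescale rather than an AI-development one.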
I don’t see any feasible way that gene editing or ‘mind uploading’ could work within the next few decades. Gene editing for intelligence seems unfeasible because human intelligence is a massively polygenic trait, influenced by thousands to tens of thousands of quantitative trait loci. Gene editing can fix major mutations, to nudge IQ back up to normal levels, but we don’t know of any single genes that can boost IQ above the normal range. And ‘mind uploading’ would require extremely fine-grained brain scanning that we simply don’t have now.
Bottom line: human intelligence augmentation would happen far too slowly to compete with ASI development.
If we want safe AI, we have to slow AI development. There’s no other way.
Gene editing can fix major mutations, to nudge IQ back up to normal levels, but we don’t know of any single genes that can boost IQ above the normal range.
This is not true. We know of enough IQ variants TODAY to raise it by about 30 points in embryos (and probably much less in adults). And we could improve on that simply by collecting more data from people who have already been genotyped.
None of them individually has a huge effect, but that doesn’t matter much; it just means you need to perform more edits.
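As a toy illustration of that arithmetic (the effect sizes below are made-up placeholders, not real GWAS estimates): with small additive per-variant effects, the expected gain is just the sum over edited variants, so a large gain is mostly a question of edit count.

```python
# Toy additive model: expected gain from editing the top-k IQ variants.
# Effect sizes are made-up placeholders, not real GWAS estimates, and
# effects are assumed independent and purely additive.
effect_sizes = sorted([0.5] * 10 + [0.3] * 50 + [0.1] * 500, reverse=True)

for k in (10, 60, 160, 560):
    gain = sum(effect_sizes[:k])
    print(f"editing the top {k:>3} variants -> ~{gain:.0f} IQ points expected")
```

Under these placeholder numbers, a ~30-point gain takes a few hundred edits: no single variant matters much, but the sum does.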
If we want safe AI, we have to slow AI development.

I agree this would help a lot.
I don’t see any feasible way that gene editing or ‘mind uploading’ could work within the next few decades. Gene editing for intelligence seems unfeasible because human intelligence is a massively polygenic trait, influenced by thousands to tens of thousands of quantitative trait loci.
I think the authors in the post referenced above agree with this premise and still consider human intelligence augmentation via polygenic editing to be feasible within the next few decades! I think their technical claims hold up, so personally I’d be very excited to see MIRI pivot towards supporting their general direction. I’d be interested to hear your opinions on their post.