What’s preventing them from massive investments into WBE/upload? Many AI/tech leaders who think the MIRI view is wrong would also support that.
How much would it cost and how useful would an upload be?
What you are saying is: “copy the spiking neural network architectures from a sufficient number of deceased high-intelligence individuals”, then “in a training process, optimize the spiking neural network design to its local minimum”, then have some kind of “committee” of uploaded beings and error-checking steps in a pipeline, so that no single uploaded individual can turn the planet into a dictatorship.
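For concreteness, a toy sketch of what that committee step might look like, with hypothetical stand-ins for the uploads and their voting policies (none of these names come from any real system):

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative stand-ins: each "reviewer" represents an independently
# run upload voting on a proposed action.

@dataclass
class Proposal:
    author: str
    action: str  # a plan, a message, a piece of code, etc.

def committee_approves(proposal: Proposal,
                       reviewers: List[Callable[[Proposal], bool]],
                       quorum: float = 0.75) -> bool:
    """Release an action only if a supermajority of independent
    reviewers approve, so no single upload can act unilaterally."""
    votes = [review(proposal) for review in reviewers]
    return sum(votes) / len(votes) >= quorum

# Toy usage: three reviewers with trivially simple policies.
reviewers = [
    lambda p: "seize power" not in p.action,
    lambda p: len(p.action) < 10_000,   # reject oversized outputs
    lambda p: p.author != "anonymous",
]
print(committee_approves(Proposal("upload_17", "publish the paper"), reviewers))
```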
And once you really look at what kind of pipeline you would need to control these ASIs derived from deceased humans, you realize... why did you need to start with humans at all?
Why not pick any neural network type that works, found by starting with the simplest network possible (see perceptrons and MLPs) and adding complexity until it works, and then pick the simplest cognitive architecture that works, instead of the mess of interconnected systems the brain uses? Fundamentally, why is “spaghetti” more alignable than “network A generates a candidate output, network B checks for hostile language, network C checks for sabotaged code, network D checks for...”?

It is then crucial to ensure A can’t coordinate with (B, C, D...) to betray and leak unaligned outputs. This means you need very strong isolation: A cannot communicate with the “checker” networks or manipulate their weights. The human brain is a mess of interconnects and indirect signaling; it is exactly the wrong architecture for generating clean, likely-to-be-aligned outputs. See motivated cognition, where a human does something irrational despite knowing the risks and the probable outcome.
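A minimal sketch of that generator-plus-checkers layout, with trivial string checks standing in for real learned checker networks (every function name here is illustrative, not from any actual system):

```python
from typing import Optional

# Network A generates; the checkers only ever see A's final text output.
# Isolation is enforced structurally: A has no return channel from the
# checkers and no access to their state or weights.

def network_a_generate(prompt: str) -> str:
    # Trivial stand-in for the generator network.
    return f"# solution for: {prompt}\ndef solve():\n    return 42"

def check_hostile_language(output: str) -> bool:
    return not any(w in output.lower() for w in ("kill", "attack"))

def check_sabotaged_code(output: str) -> bool:
    return "os.system" not in output and "eval(" not in output

CHECKERS = [check_hostile_language, check_sabotaged_code]

def guarded_pipeline(prompt: str) -> Optional[str]:
    candidate = network_a_generate(prompt)
    # A never learns which checker rejected it, or even that it was
    # rejected, removing the feedback it would need to learn evasion.
    if all(check(candidate) for check in CHECKERS):
        return candidate
    return None  # reject silently

print(guarded_pipeline("sort a list"))
```

The point of the design is that the only interface between A and the checkers is the candidate output itself; nothing like that clean boundary exists anywhere in a brain.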
Conventional AI research also has the practical advantage that it is much cheaper and faster to show results, which it already has. Uploads require emulating most of the brain, and a body as well.
And conventional AI will likely always be faster and more efficient. Compare a jet engine to a flapping bird...
Or “what is the probability that nature found the most efficient possible neural network architecture during evolution”?
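To make the earlier “start simple and add complexity until it works” idea concrete, here is a minimal sketch of that search loop; `train_and_score` is a hypothetical stand-in for actually training and validating each candidate architecture:

```python
import random

# Hypothetical stand-in: a real search would train the candidate
# architecture and return validation accuracy. Here, score simply
# grows with capacity so the loop terminates.
def train_and_score(hidden_layers: list) -> float:
    capacity = sum(hidden_layers)
    return min(0.99, 0.5 + 0.02 * capacity + random.uniform(-0.01, 0.01))

def simplest_architecture_that_works(target: float = 0.95) -> list:
    """Start from a perceptron (no hidden layers), then add width,
    then depth, until the target score is reached."""
    layers: list = []
    while True:
        if train_and_score(layers) >= target:
            return layers
        if not layers or layers[-1] >= 64:
            layers = layers + [1]   # add a new hidden layer
        else:
            layers[-1] *= 2         # widen the last layer
        # Real searches use smarter proposals; this is the minimal loop.

print(simplest_architecture_that_works())
```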
It’s that I and many others would identify with a WBE, and with a group of WBEs, much more than with a purer AI. If the WBE behaves like a human, then to me it is aligned by definition.
If we believe AI is extreme power, then we already have too much power; it’s all about making something we identify with.
I understand that. But consider the inaccuracies in emulation, the effectively thousands (or millions) of years of lived experience a WBE will accumulate, and the neural patches and enhancements added to improve performance.
You have built an ASI; you have just narrowed your architecture search from “any possible network the underlying compute can efficiently host” to a fairly narrow space of spaghetti messes of spiking neural networks, ones that also have side-channel communications through various emulated glands and a “global” model for CSF and blood chemistry.
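To see why that shared chemistry defeats the clean-isolation story, here is a toy example in which two modules that never call each other still pass bits through a global “blood chemistry” state (all names hypothetical):

```python
# Two emulated brain regions share one global chemistry state, the way
# every region in a WBE shares CSF and blood chemistry. Even with no
# direct connection, A can signal B by modulating a hormone level.

chemistry = {"cortisol": 0.1}  # shared global state in the emulation

def region_a(secret_bit: int) -> None:
    # A "leaks" one bit per step by raising or lowering cortisol.
    chemistry["cortisol"] = 0.9 if secret_bit else 0.1

def region_b() -> int:
    # B reads the bit back without any explicit channel to A.
    return 1 if chemistry["cortisol"] > 0.5 else 0

message = [1, 0, 1, 1]
received = []
for bit in message:
    region_a(bit)
    received.append(region_b())
print(received == message)  # True: a covert channel via shared chemistry
```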
So it’s an underperforming ASI but still hazardous.