Of course mind uploading would work hypothetically. The question is, how much of the mind must be uploaded? A directed graph and an update rule? Or an atomic-level simulation of the entire human body? The same principle applies to evolutionary algorithms, reinforcement learning (though not the deep-learning sort, in my opinion; that's a dead end), etc. I don't think it would be impossible to get at least a decent lower bound on the complexity needed by each of these approaches. Do the AI safety people do anything like this? That would be a paper I'd like to read.
I don’t know whether to respond to the “Once you know how to do it, you’ve done it” bit. Should I claim that this is not the case in other fields? Or will AI be “different”? What is the standard under which this statement could be falsified?
For the goal of getting humans to Mars, we can do the calculations and see that we need quite a bit of rocket fuel. You could reasonably be in a situation where you had all the design work done, but you still needed to get atoms into the right places, and that took a while. Big infrastructure projects can be easier to design. For a giant dam, most of the effort is in actually getting all the raw materials in place. This means you can know what it takes to build a dam, and be confident it will take at least 5 years given the current rate of concrete production.
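To make the "we can do the calculations" point concrete, here's a sketch of the standard Tsiolkovsky rocket-equation estimate. The numbers are illustrative assumptions (rough delta-v to low Earth orbit, typical chemical-engine exhaust velocity), not the specs of any real mission:

```python
import math

# Tsiolkovsky rocket equation: mass ratio m0/mf = exp(delta_v / v_e).
# Illustrative assumed values, not real mission parameters:
delta_v = 9400.0  # m/s, rough delta-v needed to reach low Earth orbit
v_e = 3500.0      # m/s, effective exhaust velocity of a good chemical engine

# How many kilograms on the launch pad per kilogram delivered to orbit.
mass_ratio = math.exp(delta_v / v_e)
print(f"mass ratio: {mass_ratio:.1f}")  # roughly 14.7:1
```

The exponential in the mass ratio is why "quite a bit of rocket fuel" is knowable in advance: the physics pins down a hard lower bound before any hardware exists.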
Mathematics is near the other end of the scale. If you know how to prove theorem X, you've proved it. This stops us being confident that a theorem won't be proved soon. It's more like the radioactive decay of a fairly long-lived atom: slightly more likely to happen next week than in any other given week.
I think AI is fairly close to the maths end of the scale: most of the effort is figuring out what to do.
Ways my statement could be false:
If we knew the algorithm and the compute needed, but couldn't get that compute.
If AI development were an accumulation of many little tricks, and we knew how many tricks were needed.
But at the moment, I think we can rule out confidently long AI timelines. We have no way of knowing that we aren't just one clever idea away from AGI.
The question is not just “how much is needed” but also “what's a reasonable difference between the new digital mind and the biological substance”.
We can get a rough idea of this by considering how much physical changes have mental effects: psychoactive chemicals, brain damage, etc. Compare how much ethanol changes the behaviour of a single neuron in a lab dish with how much it changes human behaviour. That gives a rough indication of how sensitively human behaviour depends on the exact behaviour of its constituent neurons.