If you know how to make an AGI, you are only a little bit of coding away from making one. We have limited AIs that can do some things, and it isn't clear what we are missing. Experts are inventing all sorts of algorithms.
There are various approaches, like mind uploading and evolutionary algorithms, that fairly clearly would work if we threw enough effort at them. Current reinforcement learning approaches seem like they might get smart with enough compute and the right environment.
Unless you personally end up helping make the first AGI, you will probably not be able to see how to do it until after it is done (if at all). The fact that you personally can't think of any path to AGI does not tell us where we are on the tech path. Someone else might be putting the finishing touches on their AI right now. Once you know how to do it, you've done it.
FWIW, I think that mind uploading is much less likely to work than a purely synthetic AI, at least in reasonably near-term scenarios. I have never read any description of how mind uploading is going to work that doesn't begin by assuming the hard part (capturing all of the necessary state from an existing mind) is already done.

I agree that purely synthetic AI will probably happen sooner.
Of course mind uploading would work hypothetically. The question is, how much of the mind must be uploaded? A directed graph and an update rule? Or an atomic-level simulation of the entire human body? The same principle applies to evolutionary algorithms, reinforcement learning (not the deep learning sort, imo; I think that's a dead end), etc. I actually don't think it would be impossible to get at least a decent lower bound on the complexity needed by each of these approaches. Do the AI safety people do anything like this? That would be a paper I'd like to read.
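To make that "directed graph and an update rule" versus "atomic-level simulation" contrast concrete, here is a crude back-of-envelope sketch of the state you would need to store at each level. Every number in it (neuron and synapse counts, bytes per synapse, molecules per neuron) is a ballpark assumption chosen for illustration, not a measured figure:

```python
# Crude back-of-envelope lower bounds on the state needed to represent a human
# brain at two levels of description. Every number is a ballpark assumption:
# the neuron and synapse counts are commonly cited estimates, and the
# bytes-per-element and molecules-per-neuron figures are illustrative guesses.

NEURONS = 8.6e10      # approximate neurons in a human brain
SYNAPSES = 1.5e14     # approximate synapses

# Level 1: "a directed graph and an update rule" -- each synapse stored as an
# edge (source id, target id, weight) at, say, ~12 bytes.
graph_bytes = SYNAPSES * 12

# Level 2: molecular-scale simulation -- assume ~1e14 relevant molecules per
# neuron (a deliberately crude guess) at ~100 bytes of state each.
molecular_bytes = NEURONS * 1e14 * 100

def fmt(num_bytes: float) -> str:
    """Format a byte count, adding a terabyte figure when small enough to mean anything."""
    if num_bytes < 1e18:
        return f"{num_bytes:.2e} bytes (~{num_bytes / 1e12:.0f} TB)"
    return f"{num_bytes:.2e} bytes"

print("connectome-level state:", fmt(graph_bytes))
print("molecular-level state: ", fmt(molecular_bytes))
print(f"ratio: {molecular_bytes / graph_bytes:.1e}x")
```

Even granting these generous assumptions, the two levels of description differ by more than eleven orders of magnitude, which is the sense in which "how much must be uploaded" dominates the feasibility question.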
I don’t know whether to respond to the “Once you know how to do it, you’ve done it” bit. Should I claim that this is not the case in other fields? Or will AI be “different”? What is the standard under which this statement could be falsified?
For the goal of getting humans to Mars, we can do the calculations and see that we need quite a bit of rocket fuel. You could reasonably be in a situation where you had all the design work done, but you still needed to get atoms into the right places, and that took a while. For big infrastructure projects, the design can be the easy part. For a giant dam, most of the effort is in actually getting all the raw materials in place. This means you can know what it takes to build a dam, and be confident it will take at least 5 years given the current rate of concrete production.
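As a rough sketch of the kind of calculation meant here, consider the rocket equation for a Mars departure burn and a material-throughput floor for a large dam. The specific figures (delta-v, specific impulse, concrete volume, pour rate) are illustrative assumptions, not numbers from the comment:

```python
import math

# Two illustrative "the physics pins down the schedule" calculations.
# All specific figures (delta-v, specific impulse, concrete volume, pour
# rate) are rough assumptions chosen for illustration.

# 1. Rocket fuel for a Mars departure: Tsiolkovsky rocket equation,
#    m0 / mf = exp(dv / (Isp * g0)).
g0 = 9.81        # standard gravity, m/s^2
isp = 450.0      # specific impulse in seconds, roughly a hydrogen/oxygen stage
dv = 3900.0      # m/s, ballpark trans-Mars injection burn from low Earth orbit
mass_ratio = math.exp(dv / (isp * g0))
propellant_fraction = 1.0 - 1.0 / mass_ratio
print(f"mass ratio {mass_ratio:.2f}: about {propellant_fraction:.0%} of the "
      "departing stage's wet mass has to be propellant")

# 2. A giant dam: a schedule floor from material throughput alone.
concrete_m3 = 28e6            # roughly Three Gorges-scale concrete volume
pour_rate_m3_per_year = 5e6   # assumed achievable concrete placement rate
print(f"placing the concrete alone takes at least "
      f"{concrete_m3 / pour_rate_m3_per_year:.1f} years")
```

Once the physics and logistics are written down, the schedule has a floor you can compute in advance.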
Mathematics is near the other end of the scale. If you know how to prove theorem X, you've proved it. This stops us being confident that a theorem won't be proved soon. It's more like the radioactive decay of a fairly long-lived atom: more likely to happen next week than in any other particular week.
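Stated precisely, the decay analogy is the memorylessness of an exponential waiting time: each individual week is unlikely, but the probabilities only shrink as you look further out, so next week is always the single most likely week. A toy sketch with an arbitrary assumed half-life:

```python
import math

# If the key idea arrives like an exponential process with some unknown rate,
# every individual week is unlikely, but the probabilities only shrink as you
# look further out, so the very next week is always the single most likely one.
# The 50-year half-life is an arbitrary stand-in, not an estimate of anything.

half_life_weeks = 50 * 52                  # assumed half-life: 50 years
lam = math.log(2) / half_life_weeks        # corresponding decay rate per week

def p_week(n: int) -> float:
    """Probability that the event lands in week n (weeks counted from 1)."""
    return math.exp(-lam * (n - 1)) - math.exp(-lam * n)

for n in (1, 2, 10, 100, 1000):
    print(f"P(event in week {n:4d}) = {p_week(n):.6f}")
```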
I think AI is fairly close to the maths end of the scale; most of the effort is figuring out what to do.
Ways my statement could be false:
If we knew the algorithm and the compute needed, but couldn't get that compute.
If AI development were an accumulation of many little tricks, and we knew how many tricks were needed.
But at the moment, I think we can rule out confident claims that AGI is a long way off. We have no way of knowing that we aren't just one clever idea away from AGI.
The question is not just "how much is needed" but also "what's a reasonable difference between the new digital mind and the biological substance?"

We can get a rough idea of this by considering how much physical changes have mental effects: psychoactive chemicals, brain damage, etc. Look at how much ethanol changes the behaviour of a single neuron in a lab dish, and how much it changes human behaviour. That gives a rough indication of how sensitively dependent human behaviour is on the exact behaviour of its constituent neurons.
I think this post sums up the situation.
https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence