> Which is basically this: I notice my inside view, while not confident in this, continues to not expect current methods to be sufficient for AGI, and expects the final form to be more different than I understand Eliezer/MIRI to think it is going to be, and that the AGI problem (not counting alignment, where I think we largely agree on difficulty) is ‘harder’ than Eliezer/MIRI think it is.
Could you share why you think that current methods are not sufficient to produce AGI?
Some context:
After reading Discussion with Eliezer Yudkowsky on AGI interventions I thought about the question “Are current methods sufficient to produce AGI?” for a while. I thought I’d check whether neural nets are Turing-complete, and a quick search says they are. To me this looks like a strong clue that we should be able to produce AGI with current methods.
But I remembered reading some people who generally seemed better informed than me having doubts.
I’d like to understand what those doubts are (and why there is apparent disagreement on the subject).
I want to be clear that my inside view is based on less knowledge and less time thinking carefully, and thus has fewer and less accurate gears, than I would like or than I expect to be true of many others’ models here (e.g. Eliezer’s).
Unpacking my reasoning fully isn’t something I can do in a reply, but if I had to say a few more words: it’s related to the idea that AGI will use qualitatively different methods and reasoning, and I don’t think current methods can get there. We’re getting our progress from figuring out how to do more and more things without thinking in this sense, rather than from learning how to think in this sense, and also from finding out that a lot more of what humans do all day doesn’t require thinking. GPT-3 taught me a lot about humans, how much they’re on autopilot, and how they still get along fine, and I went through an arc where that seemed curious, then scary, then less scary.
I’m emphasizing that this is intuition pumping my inside view, rather than things I endorse or think should persuade anyone, and my focus was very much elsewhere.
Echo the other reply that Turing completeness seems like a not-relevant test.
I agree that GPT-3 sounds like a person on autopilot.
As Sarah Constantin said: “Humans Who Are Not Concentrating Are Not General Intelligences.”
I have only a very vague idea of what the different ways of reasoning are (vaguely related to “fast and effortless” vs. “slow and effortful” in humans?). I don’t know how that translates into what’s actually going on (rather than how it feels to me).
Thank you for pointing me to a thing I’d like to understand better.
Turing completeness is definitely the wrong metric for determining whether a method is a path to AGI. My learning algorithm of “generate a random Turing machine, test it on the data, and keep it if it does the best job of all the other Turing machines I’ve generated, repeat” is clearly Turing complete, and will eventually learn any computable process, but it’s very inefficient, and we shouldn’t expect AGI to be generated using that algorithm anytime in the near future.
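For concreteness, here’s a toy Python sketch of that kind of search. The state count, step budget, tape encoding, and scoring rule are arbitrary choices of mine for illustration, not anything from the comment above; the point is just that the procedure is universal in the limit while being hopelessly inefficient in practice.

```python
import random

N_STATES = 4          # states besides the halting state (arbitrary for this sketch)
SYMBOLS = [0, 1]      # binary tape alphabet
MAX_STEPS = 200       # step budget so non-halting machines don't hang the search
HALT = N_STATES       # reaching this state means "halt"

def random_machine():
    """Random transition table: (state, symbol) -> (write, move, next_state)."""
    return {
        (s, a): (random.choice(SYMBOLS), random.choice([-1, 1]), random.randrange(N_STATES + 1))
        for s in range(N_STATES) for a in SYMBOLS
    }

def run(machine, tape_in):
    """Run the machine on an input tape for at most MAX_STEPS steps."""
    tape = dict(enumerate(tape_in))
    state, head = 0, 0
    for _ in range(MAX_STEPS):
        if state == HALT:
            break
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return [tape.get(i, 0) for i in range(len(tape_in))]

def score(machine, dataset):
    """Count matching output bits over all (input, target) pairs."""
    return sum(
        out == tgt
        for tape_in, target in dataset
        for out, tgt in zip(run(machine, tape_in), target)
    )

def search(dataset, n_candidates=10_000):
    """Generate random machines and keep whichever scores best so far."""
    best, best_score = None, -1
    for _ in range(n_candidates):
        m = random_machine()
        s = score(m, dataset)
        if s > best_score:
            best, best_score = m, s
    return best, best_score

# Toy task: flip every bit of the input.
dataset = [([0, 1, 1, 0], [1, 0, 0, 1]), ([1, 1, 0, 0], [0, 0, 1, 1])]
machine, s = search(dataset)
print("best score:", s, "of", sum(len(t) for _, t in dataset))
```

Random search like this can stumble onto a machine for a trivial task, but the number of candidates needed blows up combinatorially with task complexity, which is exactly why the universality guarantee tells you almost nothing about reaching AGI.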
Similarly, neural networks with one hidden layer are universal function approximators, and yet modern methods use very deep neural networks with lots of internal structure (convolutions, recurrences) because they learn faster, even though a single hidden layer is enough in theory to achieve the same tasks.
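As an illustration of that contrast (a sketch assuming PyTorch is available, with layer sizes I picked arbitrarily): both networks below map a 28×28 grayscale image to 10 class scores. The one-hidden-layer MLP is a universal approximator in theory, while the deep convolutional net builds in structure (locality, weight sharing) that makes it much easier to train in practice.

```python
import torch
import torch.nn as nn

shallow_mlp = nn.Sequential(          # one hidden layer: universal in theory
    nn.Flatten(),
    nn.Linear(28 * 28, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

deep_cnn = nn.Sequential(             # deep and structured: what modern practice favors
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                  # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                  # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),
)

x = torch.randn(8, 1, 28, 28)         # a dummy batch of 8 images
print(shallow_mlp(x).shape, deep_cnn(x).shape)  # both: torch.Size([8, 10])
```

The expressiveness guarantee is the same for both; the difference is how easily gradient descent actually finds a good solution.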
I was thinking that current methods could produce AGI (because they’re Turing-complete), and they’re apparently good at producing some algorithms, so they might be reasonably good at producing AGI.
The second part of that wasn’t explicit for me before your answer, so thank you :)