I want to be clear that my inside view is based on less knowledge and less time thinking carefully, and thus has fewer and less accurate gears, than I would like, or than I expect is true of many others’ models here (e.g. Eliezer’s).
Unpacking my reasoning fully isn’t something I can do in a reply, but if I had to say a few more words: it’s related to the idea that AGI will use qualitatively different methods and reasoning, and to my not thinking that current methods can get there. We’re getting our progress by figuring out how to do more and more things without thinking in this sense, rather than by learning how to think in this sense, and we’re also finding out that a lot more of what humans do all day doesn’t require thinking. GPT-3 taught me a lot about humans, how much they’re on autopilot, and how they still get along fine; I went through an arc where this seemed curious, then scary, then less scary.
I’m emphasizing that these are intuitions pumping my inside view, rather than things I endorse or think should persuade anyone; my focus was very much elsewhere.
I echo the other reply that Turing completeness seems like an irrelevant test.
I agree that GPT-3 sounds like a person on autopilot.
As Sarah Constantin put it: “Humans Who Are Not Concentrating Are Not General Intelligences.”
I have only a very vague idea of what the different ways of reasoning are (vaguely related to “fast and effortless” vs. “slow and effortful” in humans?). I don’t know how that translates into what’s actually going on, rather than how it feels to me.
Thank you for pointing me to a thing I’d like to understand better.