As a clarification of my own worldview on AI and the Singularity: I am basically saying that we are in the early part of the exponential curve. While AI’s short-term effects (over, say, the next 5 years) are overhyped, over 50–100 years AI will change the world so much that it can fairly be called a Singularity.
My biggest piece of evidence comes from the WinoGrande dataset, on which GPT-3 achieves 70.3% accuracy without fine-tuning. While BERT fell back to near-chance accuracy on WinoGrande, GPT-3 shows some common sense (though still worse than human common sense).
Also, GPT-3 can meta-learn: it can pick up the pattern of a new language the first time it sees the data, within a single prompt and without any weight updates.
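To make the meta-learning claim concrete, here is a minimal sketch of what a few-shot prompt looks like. The translation pairs and the prompt format below are illustrative placeholders, not from the original discussion; the point is only that the "training data" for the task lives entirely inside the prompt:

```python
# In-context ("meta-") learning: the model sees input/output pairs for the
# first time inside the prompt itself and must infer the pattern on the fly,
# with no gradient updates. The English->French pairs are illustrative.
examples = [
    ("sea otter", "loutre de mer"),
    ("peppermint", "menthe poivrée"),
    ("cheese", "fromage"),
]

def build_few_shot_prompt(pairs, query):
    """Format demonstration pairs as a prompt, ending with an open query."""
    lines = [f"English: {en}\nFrench: {fr}" for en, fr in pairs]
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "bread")
print(prompt)
```

A model with this capability completes the final `French:` line correctly even though the task was never part of its fine-tuning; the prompt alone carries the task specification.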
And yes, Vladimir Nesov correctly captured my feelings about the “no fire alarm for AI” issue. The problem with exponentials, in Nesov’s words, is that:
This is useless until it isn’t, and then it’s crucial.
And that’s my theory of why AI progress looks so slow: we’re in the equivalent of the 1970s era of computing for AI, when the main users of the technology are companies rather than the general public.
There is an opposition between skepticism and charity. Charity is not blanket promotion of sloppy reasoning; it’s an insurance policy against misguided stringency in familiar modes of reasoning (exemplified by skepticism). This insurance is costly: it must pay its dues in attention to things that one’s own reasoning finds sloppy or outright nonsense, in cases where that reasoning doesn’t agree that its stringency would be misguided. This should still be only a minor portion of the budget.
Charity contains the damage through compartmentalization, but reserves the option of accepting elements of alien worldviews if they ever grow up, which they sometimes unexpectedly do. The obvious danger in the technique is that you start accepting wrong things because you permitted yourself to think them first. This can be prevented with blanket skepticism (or zealous faith, as the case may be), hence the opposition.
(You are being very sloppy in these arguments. I agree with most of what deepthoughtlife said on the object-level points, and gwern also made relevant points. The general technique of being skeptical of your own skepticism doesn’t say that skepticism is wrong in any specific case where it should be opposed. Sometimes it is wrong, but that doesn’t follow in general; that’s why you compartmentalize the skepticism about skepticism, to prevent it from continually damaging your skepticism.)