“…which is entirely based on a clearly false claim of fact”
The failure mode is (for example) a systematic inability to understand a sufficiently alien worldview that keeps relying on false claims of fact. The problem is attention being trained to avoid things that are branded with falsity, bias, and fallacy, even by association. One operationalization of a solution is the Ideological Turing Test: diligently training to perchance eventually succeed in pretending to be the evil peddler of bunkum. Not very appealing.
As a clarification of my own worldview on AI and the Singularity: I am basically saying that we are in the early part of the exponential curve, and while AI’s short-term effects are overhyped on a, say, 5-year horizon, in 50-100 years AI will change the world so much that it can be called a Singularity.
My biggest evidence comes from the WinoGrande dataset, on which GPT-3 achieves 70.3% without fine-tuning. While BERT dropped back to near-chance accuracy on it, GPT-3 shows some common sense (though worse than human common sense).
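For context on what “without fine-tuning” means here: zero-shot WinoGrande evaluation is typically done by having the language model score the sentence with each candidate filled in and picking the likelier one. A minimal sketch of that scoring scheme (the log-probabilities below are fabricated stand-ins for real model scores, and the helper name `predict` is mine):

```python
# Sketch of likelihood-based zero-shot scoring on a Winograd-style task.
# A real evaluation would get these log-probabilities from a language
# model; the numbers here are fabricated for illustration.

def predict(logp_option1: float, logp_option2: float) -> int:
    """Pick the candidate whose filled-in sentence the model finds likelier."""
    return 1 if logp_option1 >= logp_option2 else 2

# (gold_label, logp_with_option1, logp_with_option2) -- fabricated scores.
examples = [
    (1, -12.3, -15.1),
    (2, -14.0, -11.8),
    (1, -10.2, -10.9),
]
correct = sum(predict(l1, l2) == gold for gold, l1, l2 in examples)
print(correct / len(examples))  # accuracy on the toy set: 1.0
```

The reported 70.3% is just this accuracy computed over the real dataset with real model likelihoods.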
Also, GPT-3 can meta-learn languages the first time it receives the data.
And yeah, Vladimir Nesov correctly called how I feel about the no-fire-alarm issue for AI. The problem with exponentials is, in Nesov’s words:
“This is useless until it isn’t, and then it’s crucial.”
And that’s my theory of why AI progress looks so slow: we’re in AI’s equivalent of the 1970s of computing, and the people using AI are companies.
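The “useless until it isn’t” dynamic can be made concrete with a toy doubling curve. The base, growth rate, and threshold below are arbitrary illustrative choices, not a forecast:

```python
# Toy model: a capability that doubles every year from a tiny base.
# All numbers are illustrative, not predictions.
capability = 1.0          # arbitrary starting level
threshold = 1_000_000.0   # level at which the capability "matters"
year = 0
while capability < threshold:
    capability *= 2
    year += 1
print(year)  # 20 doublings to cross the threshold
# One year before crossing, the capability is still only ~half the
# threshold; five years before, it is ~3% of it -- so the curve looks
# negligible almost all the way up, then suddenly dominates.
```

The point is structural: on any exponential, nearly all of the absolute progress happens in the last few doublings, whatever the actual constants are.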
There is an opposition between skepticism and charity. Charity is not blanket promotion of sloppy reasoning; it’s an insurance policy against misguided stringency in familiar modes of reasoning (exemplified by skepticism). This insurance is costly: it must pay its dues in attention to things that one’s own reasoning finds sloppy or outright nonsense, where it doesn’t agree that its stringency would be misguided. This should still be only a minor portion of the budget.
It contains the damage with compartmentalization, but reserves the option of accepting elements of alien worldviews if they ever grow up, which they sometimes unexpectedly do. The obvious danger in the technique is that you start accepting wrong things because you permitted yourself to think them first. This can be prevented with blanket skepticism (or zealous faith, as the case may be), hence the opposition.
(You are being very sloppy in these arguments. I agree with most of what deepthoughtlife said on object-level things, and gwern also made relevant points. The general technique of being skeptical of your own skepticism doesn’t say that skepticism is wrong in any specific case where it should be opposed. Sometimes it is wrong, but that doesn’t follow in general; that’s why you compartmentalize the skepticism about skepticism, to prevent it from continually damaging your skepticism.)
I’m honestly not sure you’ll get anything out of my further replies since you seem to have little interest in my actual points.
My original comment on the blatant falsehood that was the premise of the original post was as charitable as reasonably possible considering how untrue the premise was. If there were any actual evidence supporting the claim, it could have been added by an interlocutor.
I simply pointed out its most glaring flaws in plain language without value judgment. The main premise was false for even the most charitable version of the post. To be any more charitable would have required me to rewrite the entire thing since the false premise was written so deeply into the fabric of things. If they had restricted themselves to plausible things, it is quite possible it would have been a useful post, but this wasn’t.
I have neither the time nor the inclination to write out the entirety of what the argument should have been. I don’t really believe in Ideological Turing Tests, just like I don’t believe that Turing tests are a great measure for AI. It’s not that there aren’t uses for them, it’s just that those uses are niche (though an AI that reliably passes the Turing test could make a lot of money for its creators). I don’t have forever to fix bad arguments.
A basic outline of the initial argument in the original post is:
1) The singularity is moving slowly, but is already upon us. We are in takeoff.
2) The takeoff will remain slow (though quick enough to be startling).
3) Thus ‘no singularity’ is clearly wrong.
4) ‘Fast takeoff’ is also clearly false.
5) Scaling is all that matters.
6) Since scaling will be so expensive due to Landauer’s principle, high-end AGI will happen, but not in private hands for a long time.
There are several more ‘implications’ of this that I won’t bother writing because they clearly rely on the former things.
1 is meant to prove 3, 4, and 5 directly, while 2 absolutely needs 1 to be true.
2 and 5 are meant to set the stage for 6.
These were largely bare assertions (and there is a place for those). I objected that point 1 was clearly false, rendering points 3, 4, 5, and 6 impossible to evaluate based on this argument structure and the available evidence, and point 2 clearly meaningless (since it is defined falsely: something cannot remain a certain way if it is not already that way). (Even though I agree with 4 and 6, and I could rewrite their argument to make 2 much more sensible.) Since there was no extra evidence beyond a known unsound argument, the rest of it was rendered irrelevant. The leap from 1 to 5 would be quite weak even if 1 were not false.
7) And thus we should be very scared of not having any wake-up call for when AI will become very dangerous.
Point 7 is clearly unsupported at this point, since all of the assumptions leading to it are useless.
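For what it’s worth, point 6’s appeal to Landauer’s principle can at least be stated precisely: erasing one bit of information costs at least k·T·ln 2 of energy. A minimal calculation of the bound (the erasures-per-second figure is an arbitrary illustrative assumption, not a measured requirement for AGI):

```python
import math

# Landauer bound: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23             # Boltzmann constant, J/K (exact SI value)
T = 300.0                      # room temperature, K
e_bit = k_B * T * math.log(2)  # ~2.87e-21 J per erased bit
print(e_bit)

# Scaling it up: 1e26 bit erasures per second (a made-up round number)
# would dissipate, at the thermodynamic floor:
power = e_bit * 1e26           # watts
print(power)                   # ~2.9e5 W, i.e. a few hundred kilowatts
```

Whether real hardware overheads make scaling as prohibitively expensive as the post claims is a separate empirical question; the bound itself only fixes the floor.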
I like a good hypothetical, but I don’t really have any interest in continuing to engage with arguments that are that factually wrong, whose author won’t admit it.
‘The moon really is made of cheese, so what does that mean for how we should approach the sun?’ That is literally the level of uselessness I find in this approach to detailing the state of AI and how that relates to how we should approach alignment. (Like I said, the version in my original comment was the charitable one.)
I could make an argument for or against anything they claim in this post, but it wouldn’t be a response to what they actually wrote, and I don’t see how that would be useful.
I prefer the less specifically demanding method that is charity. It’s more about compartmentalizing the bunkum without snuffing it out. And it has less hazardous applications.