Not having good evidence is no reason at all to wilfully ignore bad-in-some-respect evidence. It’s just usually not worth bothering with. Not always, and the distinction needs to be drawn the hard way.
Doesn’t apply when you are trying to justify some incredibly unrelated thing...especially when you are clearly wrong, and clearly claim that something has happened. The singularity is so many steps and illogical leaps away from where we are that there is little real evidence (all of which points away from an imminent singularity). Your claim is a completely untrue basis for the entire thing. Make a real argument, don’t simply beg the question.
I’m not the OP; I’m not making claims about technological singularity in this thread. I’m making claims about the policy of dismissing unfounded claims of any kind.
I admit, I didn’t actually check the name of the original poster versus yours. It’s not actually relevant to anything but my use of the pronoun ‘you’, and even that works perfectly well as a general ‘you’ (a quirk of English that is a bit strange). The rest of my response stands: it is a terrible argument to make about evidence. The original post is a textbook example of begging the question. Confident claims should not be made on effectively non-existent evidence.
What do you mean by “argument about evidence”? I’m not arguing that the evidence is good. I’m arguing that, in general, the response to apparently terrible evidence should be a bit nuanced: there are exceptions to the rule of throwing it out, and some of them are difficult to notice. I’m not even claiming that this is one of those exceptions. So I’d call it an argument about handling of evidence rather than an argument about evidence.
What’s wrong with the argument about handling of evidence I cited?
You are definitely making an argument about the evidence’s status. What’s wrong is that you are smuggling in the assumption that there is evidence that is simply flawed, rather than a lack of evidence. To ‘willfully ignore’ evidence requires that there be real evidence! The link appears to be a non-sequitur otherwise. It is hardly an isolated demand for rigor to insist that there be some evidence at all that the thing is actually happening before writing a retrospective on ‘why it HAPPENED, who WAS wrong, and who WAS right’. Thus, your argument is terrible.
There is a useful distinction between an argument not applying (being irrelevant, flawed-in-context) and it being terrible (invalid, flawed-in-itself).
“you are smuggling in the assumption that there is evidence that is simply flawed rather than a lack of evidence”
This is not an assumption that would benefit from being sneakily smuggled in. Instead, it is the main claim I’m making: that even apparently missing evidence should still be considered a form of evidence, merely of a very disagreeable kind, probably not useful, yet occasionally holding a germ of truth, so not worth unconditionally dismissing.
The post I cited is targeted at impressions like this, that something is not a legitimate example of evidence, not worthy of consideration. Perhaps it’s not there at all! But something even being promoted to attention, getting noticed, is itself a sort of evidence. It may be customary not to count that as a legitimate form of evidence, but it’s more accurate to say that it’s probably very weak (at least in that framing, without further development), not that it should never, on general principle, interact with one’s state of knowledge in any way (including after more consideration).
Incidentally, evidence in this context is not something that should make one believe a claim more strongly; it could as easily make one believe the claim less (even when the piece of evidence “says” to believe it more), for that is the only reason it can be convincing in the first place. So it’s not something to defend skepticism from. It just potentially shifts beliefs, or sets the stage for that eventually happening, in a direction that can’t be known in advance, so its effect can’t really be read off the label.
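To make the direction-of-update point concrete, here is a minimal Bayes-rule sketch in Python with made-up numbers (the function and figures are illustrative, not anything from the thread): the same observation raises or lowers a belief depending only on its likelihood ratio, which is not something that can be read off what the evidence “says”.

```python
# Minimal sketch with made-up numbers (illustrative only): Bayes' rule shows the
# same observation can raise or lower a belief, depending on the likelihood ratio.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """P(H | observation) from a prior on H and the two likelihoods."""
    joint_h = prior * p_obs_given_h
    joint_not_h = (1 - prior) * p_obs_given_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.10  # hypothetical prior credence in the claim

# The observation is likelier if the claim is true -> credence rises.
print(posterior(prior, p_obs_given_h=0.6, p_obs_given_not_h=0.3))  # ~0.18

# The same kind of observation is likelier if the claim is false
# (confident assertions are cheap to produce either way) -> credence falls.
print(posterior(prior, p_obs_given_h=0.3, p_obs_given_not_h=0.6))  # ~0.05
```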
You did not make that claim itself. The link is not even on target for what you are claiming. It is a rant against unachievable demands for rigor, not a claim that all demands for basic coherency are wrong. The clear reason, which I already stated, for why it isn’t evidence is that it is based entirely on a clearly false premise. If it doesn’t apply to the question at hand, it isn’t evidence! This is basic reasoning. When a thing hasn’t happened, and there is no evidence it is imminent, we can’t possibly have an actual retrospective, which is what this claims to be. If this were simply a post claiming that they think these schools are likely to be right, and these wrong, for some actual reasons, that would be a completely different kind of thing from this, which is entirely based on a clearly false claim of fact.
“which is entirely based on a clearly false claim of fact”
The failure mode is (for example) a systematic inability to understand a sufficiently alien worldview that keeps relying on false claims of fact. The problem is attention being trained to avoid things that are branded with falsity, bias, and fallacy, even by association. One operationalization of a solution is the Ideological Turing Test: diligently training to perchance eventually succeed in pretending to be the evil peddler of bunkum. Not very appealing.
I prefer the less specifically demanding method that is charity. It’s more about compartmentalizing the bunkum without snuffing it out. And it has less hazardous applications.
As a clarification of my own worldview on AI and the Singularity: I am basically saying that we are in the early part of the exponential curve, and while AI’s short-term effects (over, say, 5 years) are overhyped, in 50-100 years AI will change the world so much that it can be called a Singularity.
My biggest piece of evidence comes from the WinoGrande dataset, on which GPT-3 achieves 70.3% without fine-tuning. While BERT falls back to near chance-level accuracy, GPT has some common sense (though worse than human common sense).
Also, GPT-3 can meta-learn languages the first time it receives the data.
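For concreteness, here is a rough sketch of the kind of zero-shot scoring that needs no fine-tuning: fill the blank with each candidate and pick the completion the model assigns the lower loss. This is an illustration, not the exact GPT-3 evaluation protocol; GPT-2 via the transformers library stands in for GPT-3, and the item is a made-up Winograd-style example.

```python
# Rough sketch, not the exact GPT-3 evaluation protocol: score a WinoGrande-style
# item zero-shot by substituting each candidate into the blank and choosing the
# option whose full sentence the model finds more likely (lower LM loss).
# GPT-2 stands in for GPT-3 here; the example item is made up.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_loss(text: str) -> float:
    """Average per-token negative log-likelihood of the sentence under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def pick_option(sentence_with_blank: str, options: list) -> str:
    """Fill the blank with each option and return the one with the lower loss."""
    return min(options, key=lambda o: sentence_loss(sentence_with_blank.replace("_", o)))

item = "The trophy didn't fit in the suitcase because _ was too big."
print(pick_option(item, ["the trophy", "the suitcase"]))  # intended answer: "the trophy"
```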
And yeah, Vladimir Nesov correctly called my feelings on the no-fire-alarm issue for AI. The problem with exponentials is, in Nesov’s words,
“This is useless until it isn’t, and then it’s crucial.”
And that’s my theory of why AI progress looks so slow: We’re in the 70s era for AI, and the people using AI are companies.
There is an opposition between skepticism and charity. Charity is not blanket promotion of sloppy reasoning; it’s an insurance policy against misguided stringency in familiar modes of reasoning (exemplified by skepticism). This insurance is costly: it must pay its dues in attention to things that one’s own reasoning finds sloppy or outright nonsense, where it doesn’t agree that its stringency would be misguided. This should still be only a minor portion of the budget.
It contains the damage with compartmentalization, but reserves the option of accepting elements of alien worldviews if they ever grow up, which they sometimes unexpectedly do. The obvious danger in the technique is that you start accepting wrong things because you permitted yourself to think them first. This can be prevented with blanket skepticism (or zealous faith, as the case may be), hence the opposition.
(You are being very sloppy in these arguments. I agree with most of what deepthoughtlife said on object-level things, and gwern also made relevant points. The general technique of being skeptical of your own skepticism doesn’t say that skepticism is wrong in any specific case where it should be opposed. Sometimes it’s wrong, but that doesn’t follow in general; that’s why you compartmentalize the skepticism about skepticism, to prevent it from continually damaging your skepticism.)
I’m honestly not sure you’ll get anything out of my further replies since you seem to have little interest in my actual points.
My original comment on the blatant falsehood that was the premise of the original post was as charitable as reasonably possible considering how untrue the premise was. If there was any actual evidence supporting the claim, it could have been added by an interlocutor.
I simply pointed out its most glaring flaws in plain language without value judgment. The main premise was false for even the most charitable version of the post. To be any more charitable would have required me to rewrite the entire thing since the false premise was written so deeply into the fabric of things. If they had restricted themselves to plausible things, it is quite possible it would have been a useful post, but this wasn’t.
I have neither the time nor the inclination to write out the entirety of what the argument should have been. I don’t really believe in Ideological Turing Tests, just as I don’t believe that Turing tests are a great measure for AI. It’s not that there aren’t uses for them, it’s just that those uses are niche (though an AI that reliably passes the Turing Test could make a lot of money for its creators). I don’t have forever to fix bad arguments.
A basic outline of the initial argument in the original post is:
1) The singularity is moving slowly, but is already upon us. We are in takeoff.
2) The takeoff will remain slow (though quick enough to be startling).
3) Thus ‘no singularity’ is clearly wrong.
4) Fast takeoff is also clearly false.
5) Scaling is all that matters.
6) Since scaling will be so expensive due to Landauer’s principle, high-end AGI will happen, but not in private hands for a long time. [A back-of-the-envelope sketch of the Landauer limit follows after this outline.]
There are several more ‘implications’ of this that I won’t bother writing because they clearly rely on the former things.
1 is meant to prove 3, 4, and 5 directly, while 2 absolutely needs 1 to be true.
2 and 5 are meant to set the stage for 6.
These were largely bare assertions (which there is a place for). I objected that point 1 was clearly false, rendering points 3, 4, 5, and 6 impossible to judge based on this argument structure and the available evidence, and point 2 clearly meaningless (since it is defined falsely: something cannot remain a certain way if it is not already that way). (Even though I agree with 4 and 6, and I could rewrite their argument to make 2 much more sensible.) Since there was no extra evidence beyond a known unsound argument, the rest of it was rendered irrelevant. The leap from 1 to 5 would be quite weak even if not for the fact that 1 is false.
7) And thus we should be very scared of not having any wake-up call for when AI will become very dangerous.
7 is clearly unsupported at this point since all of the assumptions leading here are useless.
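For a rough sense of the Landauer cost invoked in point 6 of the outline, here is a back-of-the-envelope sketch (my own numbers, not the original post’s): the Landauer limit sets the minimum energy to erase one bit at k_B·T·ln 2, a theoretical floor that real hardware today dissipates far above.

```python
# Back-of-the-envelope sketch (my numbers, not the original post's): the Landauer
# limit gives the minimum energy to erase one bit, E = k_B * T * ln 2. Actual
# hardware dissipates far more than this theoretical floor.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

e_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {e_per_bit:.2e} J per bit erased")  # ~2.87e-21 J

# Energy floor for 1e21 irreversible bit erasures per second:
print(f"Power floor for 1e21 bit erasures/s: {e_per_bit * 1e21:.2f} W")  # ~2.87 W
```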
I like a good hypothetical, but I don’t really have any interest in continuing to engage with things that are that wrong factually, and won’t admit it.
‘The moon really is made of cheese, so what does that mean for how we should approach the sun?’ That is literally the level of uselessness I find in this approach to detailing the state of AI and how that relates to how we should approach alignment. (Like I said, the version in my original comment was the charitable one.)
I could make an argument for or against anything they claim in this post, but it wouldn’t be a response to what they actually wrote, and I don’t see how that would be useful.