Maybe, given the number of times you feel you’ve had to repeat yourself, you’re not making yourself as clear as you think you are.
I find that when I talk about this issue with people who clearly have expert knowledge of AI (including the people who came to the AAAI symposium at Stanford last year, and all of the other practicing AI builders who are my colleagues), the points I make are not only understood but understood so clearly that they tell me things like “This is just obvious, really, so all you are doing is wasting your time trying to convince a community that is essentially comprised of amateurs” (that is a direct quote from someone at the symposium).
I always want to make myself as clear as I can. I have invested a lot of my time trying to address the concerns of many people who responded to the paper. I am absolutely sure I could do better.
We’re all amateurs in the field of AI; it’s just that some of us actually know it. Seriously, don’t pull the credentials card. I’m not impressed. I know exactly how “hard” it is to pay the AAAI a hundred and fifty dollars a year for membership and three hundred dollars to attend their conference. Does claiming to have spent four hundred and fifty dollars make you an expert? What about bringing up that it’s at “Stanford”? What about insulting everybody you’re arguing with?
I’m a “practicing AI builder” (what a nonsense term), although my little heuristics engine is actually running in the real world, processing business data and automating hypothesis-elevation work for humans, who can agree with its best hypothesis, select among its other hypotheses, or enter their own. That is, it’s actually picking strawberries.
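Purely for illustration, and emphatically not the commenter’s actual system (the Hypothesis class, the resolve helper, and the scores below are all invented), a minimal sketch of that human-in-the-loop flow might look like:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str    # human-readable description of the hypothesis
    score: float  # engine's confidence; higher is better

def resolve(hypotheses, human_choice=None, human_entry=None):
    """Return the hypothesis a human reviewer signs off on.

    The engine proposes ranked hypotheses; the reviewer may accept the
    top one (the default), select any other by index, or type their own.
    """
    ranked = sorted(hypotheses, key=lambda h: h.score, reverse=True)
    if human_entry is not None:    # reviewer entered their own answer
        return Hypothesis(label=human_entry, score=1.0)
    if human_choice is not None:   # reviewer selected an alternative
        return ranked[human_choice]
    return ranked[0]               # reviewer agreed with the engine's best guess

# Example: the engine ranks three readings of a business record.
candidates = [
    Hypothesis("invoice, net 30", 0.82),
    Hypothesis("invoice, net 60", 0.11),
    Hypothesis("credit memo", 0.07),
]
print(resolve(candidates).label)                   # engine's best: "invoice, net 30"
print(resolve(candidates, human_choice=2).label)   # reviewer override: "credit memo"
```

The point of the design, as described, is that the engine never acts on its own: every hypothesis passes through a human decision before anything downstream happens.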
Setting aside the tit-for-tat over your hostile introductory paragraph: I don’t doubt your desire to be clear. But you have a conclusion you’re very obviously trying to reach, and you leave huge gaps on the way to getting there. The fact that others who want to reach the same conclusion overlook those gaps doesn’t demonstrate anything. And what is your conclusion? That we don’t have to worry about poorly designed AI being dangerous, because… contextual information, or something. Honestly, I’m not even sure anymore.
Then you propose a model that you say is patterned after the single most dangerous brain on the planet, and you offer this as proof that it’s safe! Seriously.
As for whether you could do better: no, not in your current state of mind. Your hubris prevents you from doing better. You’re convinced that you know better than any of the people you’re talking with, and that they’re ignorant amateurs.
When someone repeatedly distorts and misrepresents what is said in a paper, then blames the author of the paper for being unclear… then hears the author carefully explain the distortions and misrepresentations, and still repeats them without understanding… well, there is a limit.
Not to suggest that you are implying it, but rather as a reminder: nobody here is deliberately misunderstanding you.
But at any rate, I don’t think we’re accomplishing anything here except driving your karma score lower, so by your leave, I’m tapping out.
Why not raise his karma score instead?
Because driving his karma score lower was the practical result, not the problem itself. The problem was that the conversation wasn’t going anywhere, and he didn’t seem interested in it going anywhere.