Could you at least give some signal of your idea’s quality that distinguishes you from the millions of people with hopeless ideas who scream “You guys are doing it all wrong, I’ve got something that’s just totally different from everything else and will get it right this time”?
Because a lot of what you’ve said so far isn’t promising.
Yes, I read that part of your comment. But having read roughly 1,500 words on your idea by now (this article plus our past exchange), I still can’t find a sign of anything promising, and you’ve had more than enough space to distinguish yourself from the dime-a-dozen folks claiming to have all the answers on AI.
I strongly recommend that you look at whatever you have prepared for your next article, and cut it down to about 500 words in which you get straight to the point.
LW is a great site because of its frequent comments and articles from people who have assimilated Eliezer Yudkowsky’s lessons on rationality; I’d hate to see it turn into a platform for just any AI idea that someone thinks is the greatest ever.
Yikes, take it easy. When I said “let’s argue”, I meant let’s argue after I’ve made some of my main points.
Which will be soon, right?