I feel like every week there’s a post that says, “I might be naive, but why can’t we just do X?” — where X is already well known and not considered sufficient. So it’s easy to file a post claiming a relatively direct solution into that category.
The amount of effort and thinking in this case, plus the reputation of the poster, draws a clear distinction between the useless posts and this one, but it’s easy to imagine people pattern-matching into believing that this one is also probably useless without engaging with it.
(Ah, to clarify: I wasn’t saying that Kaj’s post seems insane; I was referring to the fact that lots of thinking/discourse in general about AI seems to be dangerously insane.)