Also, this makes me wonder if the SIAI’s intention to publish in philosophy journals is such a good idea. Presumably part of the point was for them to gain status by being associated with respected academic thinkers. But this isn’t really the kind of thinking anyone would want to be associated with...
The way I look at it: if something like this can survive peer review, what do people make of work whose authors either did not try to pass peer review, or tried and could not? They probably think pretty poorly of it.
I can’t speak to this particular article, but oftentimes special editions of journals, like this one (i.e. effectively a symposium on the work of another), are not subjected to rigorous peer review. The responses are often solicited by the editors and there is minimal correction or critique of the content of the papers, certainly nothing like you’d normally get for an unsolicited article in a top philosophy journal.
But, to reiterate, I can’t say whether or not the Journal of Consciousness Studies did that in this instance.
I can’t speak to this particular article, but oftentimes special editions of journals, like this one (i.e. effectively a symposium on the work of another), are not subjected to rigorous peer review.
On the one hand, this is the cached defense that I have for the Sokal hoax, so now I have an internal conflict on my hands. If I believe that Tipler’s paper shouldn’t have been published, then it’s unclear why Sokal’s should have been.
Does anyone think that visibility among philosophers has a practical impact on solving the technical problems? Apparently the people who could plausibly cause harm in the near term are AI researchers, but most of them are chasing Internet traffic or working on their own projects.
Gaining visibility is a good thing when what is needed is social acceptance, or when more people are needed to solve a problem. Publishing in peer-reviewed (philosophy) journals can bring more scholars to the cause, but more people caring about AI is not a good thing per se.
Argh.
Oh dear, oh dear. How to resolve this conflict?
Perhaps rum...