I signed up to be a reviewer for the TMLR journal, in the hope of learning more about ML research and ML papers. So far I’ve found the experience quite frustrating: all three of the papers I’ve reviewed have been fairly bad and hard to understand, and it’s taken me a while to explain to the authors what I think is wrong with their work.
Have you also tried reviewing for conferences like NeurIPS? I’d be curious what the differences are.
Some people send papers to TMLR when they think they wouldn’t be accepted to the big conferences due to not being that “impactful”—which makes sense since TMLR doesn’t evaluate impact. It’s thus possible that the median TMLR submission is worse than the median conference submission.
I reviewed for ICLR this year and found it somewhat more rewarding; the papers were better, and I learned something useful from writing my reviews.
In my experience, ML folks submit to journals when:
1. Their work greatly exceeds the scope of 8 pages
2. They have been rejected multiple times from first- (or even second-)tier conferences
Because of the first reason, I think the best papers in TMLR are probably on par with (or better than) the best papers at ML conferences, but you’re right that the median could be worse.
Low-confidence take: Length might be a reasonable heuristic to filter out the latter category of work.