Have you also tried reviewing for conferences like NeurIPS? I’d be curious what the differences are.
Some people send papers to TMLR when they think they wouldn’t be accepted to the big conferences due to not being that “impactful”—which makes sense since TMLR doesn’t evaluate impact. It’s thus possible that the median TMLR submission is worse than the median conference submission.
I reviewed for ICLR this year and found it somewhat more rewarding: the papers were better, and I learned something useful from writing my reviews.
In my experience, ML folks submit to journals when:
1. Their work greatly exceeds the scope of 8 pages, or
2. They have been rejected multiple times from first- (or even second-)tier conferences.
Because of the first group, I think the best papers in TMLR are probably on par with (or better than) the best papers at ML conferences, but you're right that the median could be worse.
Low-confidence take: Length might be a reasonable heuristic to filter out the latter category of work.