Hacker News
I came here to say something similar. As someone who works in a field that applies machine learning but is not purely focused on it, I interact with people who think that arXiv is the only relevant platform and that they don't need to submit their work to any journal, as well as people who still think that preprints don't count at all and that data isn't published until it's printed in an academic journal. It can feel like a clash of worlds.

I think both sides could learn from the other. In the case of ML, I understand the desire to move fast, and an average time to publication of 250-300 days at some top-tier journals can feel like an unnecessary burden. But having been on both sides of peer review, there is value in the system, and it has made for better work.

Skipping it entirely follows the same spirit as benchmarking your approach against at most one alternative, and even that only as an afterthought. Or benchmaxxing without exploring the actual real-world consequences, time and cost trade-offs, etc.

Now, is academic publishing perfect? Of course not, very far from it. It desperately needs reform: to keep it economically accessible, to make it time-efficient for authors, editors, and peer reviewers alike, to prevent the "hot topic of the day" from dominating journals, and to ensure that peer review aligns with the needs of the community and actually improves the quality of the work, rather than enabling "malicious peer review" that exists to sneak in citations or pet peeves.

Given the power that the ML field holds and its interesting experiments with open review, I wish the field would engage more with the scientific system at large and perhaps try to drive reforms and improve it, rather than completely abandoning it and treating a PDF hosting service as a journal (of course, preprints would still be desirable and are important, but they cannot carry the entire field alone).

Simply anticipating basic pushback from reviewers makes sure that you do a somewhat thorough job. Not 100% thorough, and the reviews are sometimes frivolous, lazy, and stupid. But just knowing that what you put out there has to pass the admittedly noisily gatekept gate of peer review improves papers overall, in my estimation. There is also a negative side: people try to hide limitations and honest assessments, and they cherry-pick and curate their tables more in anticipation of knee-jerk reviewers. But overall I think that without any peer review, author culture would become much more lax and bombastic, trending toward engagement bait and content optimized for social media attention.

The current balance, where people write a paper with reviewers in mind, upload it to arXiv before the review concludes, and keep it on arXiv even if it is rejected, works nicely. People get to form their own opinion of it, but there is also enough self-imposed quality control, just from wanting it to pass peer review, that even a rejected paper is still better than one written without caring about or anticipating peer review at all. This works because people are somewhat incentivized to get official peer-reviewed publications too. And rejection is not the end of the world either, because people can already read the work and build on it via arXiv.

I really am not sure about that: https://biologue.plos.org/wp-content/uploads/sites/7/2020/05...

The problem is that "optimizing for peer review" is not the same thing as optimizing for quality. E.g., I like to add a few tongue-in-cheek remarks to entertain the reader. But then I have to worry endlessly about anal-retentive reviewers who refuse to see the big picture.

Currently, a rough rule of thumb is that a PhD student can graduate after approximately three papers published in good peer-reviewed venues.

If peer review were to go away, this whole academic system would be thrown into crisis. It's dysfunctional and has many problems, but it's kinda load-bearing for the system to chug along.

No hard rule, no crisis.

Maybe we can go back to very opinionated "true" academia, where there are institutional gatekeepers, but they mostly get it right on whom to award (and not), vs the current game of "whoever plays ball with funding sources the best = the best academic", which is obviously bullshit.

You'll still need to convince the purseholders to pay you, and they'll want some objective metric to measure your output, and whatever metric they pick will be gamed.
The point of my comment was that in much earlier institutions of knowledge and excellence, the only transparent metric was whether or not they approved you.

You may have delivered value in peer review, but on the whole, peer review delivers negative value. https://www.experimental-history.com/p/the-rise-and-fall-of-...

The arXiv vs. journal debate looks a lot like the "should the work get done, or should the work get certified" tension you see all over institutions. If the certification does not actually catch fraud or errors, it isn't making the foundations stronger, which is usually the only justification for the certification side.

I've noticed it's field dependent. Some fields don't really feel much need to publish in a real journal.

Others (at least chemistry) will accept it, but it raises concern if a paper is only available as a preprint.