I'm not sure why we're so focused on filtering what gets into arXiv (which is an uphill battle and DOA at this point) vs. fixing the indexing, i.e., the PageRank of academia.

Google "sorted out" a messy web with PageRank. Academic papers link to each other. What prevents us from building a ranking from that citation graph?
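The idea sketches out straightforwardly: run PageRank-style power iteration over the citation graph, treating "paper A cites paper B" like a hyperlink. The graph below is a made-up toy example, not real arXiv data, and the paper IDs are hypothetical:

```python
# Toy sketch: PageRank-style scoring over a citation graph.
# An edge A -> B means paper A cites paper B.
# Damping factor and iteration count are conventional choices.

DAMPING = 0.85
ITERATIONS = 50

# citations[paper] = list of papers it cites (hypothetical IDs)
citations = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

papers = list(citations)
n = len(papers)
rank = {p: 1.0 / n for p in papers}  # start uniform

for _ in range(ITERATIONS):
    new_rank = {p: (1.0 - DAMPING) / n for p in papers}
    for p, cited in citations.items():
        if cited:
            # Each paper splits its rank evenly among the papers it cites.
            share = DAMPING * rank[p] / len(cited)
            for q in cited:
                new_rank[q] += share
        else:
            # Dangling paper (cites nothing): spread its rank uniformly.
            for q in papers:
                new_rank[q] += DAMPING * rank[p] / n
    rank = new_rank

for p, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{p}: {r:.3f}")
```

In this toy graph, "C" ends up ranked highest because three of the four papers cite it. The real difficulty is less the algorithm than the incentives it creates, as the replies below point out.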

I'm conscious I might be over-simplifying things, but curious to see what I am missing.

I am of the same opinion, and ultimately ArXiv becoming a journal that can prevent one from publishing a paper — no matter how junk it is — would pretty much kill its purpose. But I suppose that now that flooding the internet with LLM-generated garbage is almost endorsed by some satanic people, it is pretty much a security issue to have some sort of filter on uploads.

Now, honestly, I have no idea why one would spend resources on uploading terabytes of LLM garbage to arXiv, but they sure can. Even if some crazy person publishes, say, 2 nonsense papers daily, it does no harm and, if anything, provides valid data for psychology research. But if somebody actually floods it with non-human-generated content, well, I suppose it isn't even that expensive to make ArXiv totally unusable (and perhaps even unfeasible to host). So there has to be some filtering — but only to prevent that kind of abuse.

Otherwise, I indeed think that proper ranking, linking and user-driven moderation (again, not to prevent anybody from posting anything, but to label papers as more interesting for the specific community) is the only right way to go.

PageRank was inspired by bibliometrics and the evaluation of science publications. It's messed up now precisely because of the rankings. Further fiddling with ranking will not fix the problem.