Hacker News

The current academic publishing system is so broken that I can’t see it surviving for much longer. In addition to outright fraud, journals are chock full of low quality papers and papers that cannot be reproduced. Citations are a useless metric because of how circular the citation networks are.

The best way forward is probably some metric of reproducibility. Can your paper/experiment be reproduced? Has anyone done so? Did they succeed or fail? Did they publish their results? That would quickly separate the wheat from the chaff.
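As a rough illustration of what such a metric might look like (everything here is hypothetical, not an existing system): track each published reproduction attempt against a paper and report the fraction that succeeded, distinguishing "no one has tried" from "tried and failed".

```python
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    # Each attempt is (group, succeeded, published_their_results)
    attempts: list = field(default_factory=list)

    def reproducibility_score(self):
        """Fraction of *published* reproduction attempts that succeeded.

        Returns None when no attempt has been published, so an untested
        paper is distinguishable from one that failed replication."""
        published = [ok for _, ok, pub in self.attempts if pub]
        if not published:
            return None
        return sum(published) / len(published)

p = Paper("Example result")
p.attempts = [
    ("Lab A", True, True),   # reproduced and published
    ("Lab B", False, True),  # failed to reproduce, published
    ("Lab C", True, False),  # reproduced but never published
]
print(p.reproducibility_score())  # 0.5
```

The unpublished attempt is deliberately excluded: only results that were actually written up can count toward the metric, which mirrors the "did they publish their results?" question above.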



This would indeed be great, but the impact on one's career of reproducing another's work would have to be equivalent to that of the original publication on the original author's career. Otherwise, nobody will want to spend the time certifying another's work.

This could be a multi-stage process where all involved parties become authors on the paper, but that would require a substantial shift in what academia is today.


That’s exactly what it should be. After the authors, there should be multiple lines for the individuals or groups that reproduced the research. Prizes should be divvied up among the original researchers and the first N groups to reproduce the result.

Also, I’d love to see the Master’s degree become focused on reproducing others’ work for two years, with original research reserved for the PhD.


Or...a scientist's reputation can also be bolstered by how many papers they are able to reproduce, or debunk.

Debunking flawed work also brings value to science. The incentives of the system need to change.

As an analogy, performing code review brings value to software development.


People have been saying this for decades. Both the idea that the academic publishing model is about to die (due to too many papers, low quality, broken peer review, etc.), and that we need to focus on reproducibility.

These ideas themselves are valid but not nuanced. I could also argue that academic publishing is at its most successful ever, with science advancing at an incredible pace even as complexity soars; and I could argue that everyone is now thinking about reproducibility and it's baked into the academic process these days (for decades, having another lab reproduce your paper has been critical to its acceptance by the community, even after it's accepted for publication).

An analogy: it would be like saying "software is so broken, quality is getting worse over time, new programmers can't FizzBuzz, there are security bugs everywhere, and no one can reliably deliver a healthcare website or computer game on time." It's the same argument you're making.


> The best way forward is probably some metric of reproducibility. Can your paper/experiment be reproduced? Has anyone done so? Did they succeed or fail? Did they publish their results? That would quickly separate the wheat from the chaff.

I wish it were more common to cite a paper and the first few attempts at reproducing it side by side.

It would reward the reproducibility study with a lot of citations and reinforce the credibility of the cited original paper, because the reader now knows that the results quoted have been certified elsewhere.

Sure it can be gamed, but if you see $shady_institution and the reproducibility study was made by $other_shady_institution you can still draw your own conclusions...



