There’s an accurate way to confirm fraud: look for inconsistencies and replicate experiments.

If the fraudsters “fail to replicate” legitimate experiments, ask them for details/proof, and replicate the experiment yourself while providing more details/proof. Either they’re running a different experiment, their details have inconsistencies, or they have unreasonable omissions.

Of course this is slightly messy too. Fraudsters are probably always incorrect, though of course they could have stolen the data. But being incorrect doesn't mean you're intentionally committing fraud.
That would be great if journals bothered publishing replication studies. But since they don't, researchers can't get adequate funding to perform them, and since they can't perform them, they don't exist.

We can't look for failed replication experiments if none exist.

That approach is accurate, but not scalable.

The effort to publish a fraudulent study is less (sometimes much less) than the effort to replicate a study.

Yeah, but this happens all the time.

>>95% of the time, the fraudsters get off scot-free. Look at Dan Ariely: Caught red-handed faking data in Excel using the stupidest approach imaginable, and outed as a sex pest in the Epstein files. Duke is still giving him their full backing.

It’s easy to find fraud, but what’s the point if our institutions have rotted all the way through and don’t care, even when there’s a smoking gun?

Is it that easy?

Machine Learning papers, for example, used to have a terrible reputation for being inconsistent and impossible to replicate.

That didn't make them (all) fraudulent, because that requires intent to deceive.
