I contributed nothing other than a statistical framework which was discarded when it broke their predefined conclusion.
A few computer science friends of mine worked at a social science department during university. Their tasks included maintaining the computers, but also supporting the researchers with experiment design (when computers were involved) and statistical analysis. They got into trouble because they didn't want to use unsound or incorrect methods.
The general train of thought was not "does the data confirm my hypothesis?" but "how can I make my data confirm my hypothesis?" Experiments were often designed with a bias toward producing the desired results.
As a result, this kind of scientific misconduct was business as usual, and the guys eventually quit.
At least in the social sciences there is an expectation of having some data!
Research fraud is common pretty much everywhere in academia, especially where there's money, i.e. in fields adjacent to industry.
Some observations:

Firstly, inventing a conclusion is a big problem. I'm not even talking about a hypothesis that needs to be tested, but a conclusion. A vague, ambiguous hypothesis that was likely true was invented to support the conclusion, and the relationship was inverted. Then data was selected and fitted until there was a level of confidence at which it was worth publishing.

Secondly, they were using very subjective data collection methods, carried out by extremely biased people, then mangling and interpolating the data to make it look like there was more observational data than there actually was.

Thirdly, when you do some honest research and the results look bad, implying the entire field is compromised, it goes unpublished, because there's a conference coming up that everyone is really looking forward to and has already booked flights and hotels for.
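To make the "select and fit until there was a level of confidence" point concrete, here is a minimal sketch in Python. The data and the procedure are entirely hypothetical; it just shows that if you keep re-splitting pure noise until a test clears a significance threshold, a "finding" always appears eventually:

```python
# Sketch: retrying subgroup splits on pure noise until "significance".
# All data here is simulated; no real study or pipeline is implied.
import random
import statistics
import math

def t_statistic(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

random.seed(0)

# Pure noise: no real effect exists anywhere in this "study".
population = [random.gauss(0, 1) for _ in range(1000)]

# "Select and fit": keep drawing new subgroup splits until the test
# clears an arbitrary threshold, then stop and declare a finding.
attempts = 0
while True:
    attempts += 1
    subjects = random.sample(population, 60)
    group_a, group_b = subjects[:30], subjects[30:]
    if abs(t_statistic(group_a, group_b)) > 2.0:  # roughly p < 0.05
        break

print(f"'Significant' difference found after {attempts} subgroup splits")
# Under the null, about 1 in 20 splits crosses the threshold by
# chance alone, so a spurious effect is guaranteed given enough tries.
```

The stopping rule is the whole trick: the reported test looks legitimate in isolation, and the discarded attempts never appear in the paper.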
If you want to read some of the hellish bullshit, look up critiques of Q methodology.