My initial reaction is that this study seems to delegate the classification of misinformation to a set of fact-checkers and journalists. It then uses this to classify links as either misinformation or disinformation based on a trustworthiness score. Unfortunately, I can't open the table of the exact fact-checkers and journalists because none of the links work on my mobile browser, so I'll have to guess at the contents for now.
Delegating the classification of truth to these third parties allows for significant bias in the results. Most conservatives consider mainstream media and fact-checkers to have a significant progressive bias. If that's correct, it would explain at least some of this study's results. I haven't done a thorough analysis myself, so I can't say either way, but it would be worth investigating.
The study also mentions that many users could have been bots, which could likewise have skewed the results. Since this comes up in the abstract, I suspect it's addressed later in the paper.
Either way, continuing to read… very interesting study.
As for your objection and concern - the study deals with that issue by letting participants decide for themselves what counts as high-quality and low-quality content.
This holds if you look at outright conspiracy theories: globally, conservative users are the most susceptible to such campaigns.
I will add the caveat "at this moment in time". I expect that sufficiently virulent disinfo targeting the left will eventually evolve.
For additional reading (not directly related to lib/con disinfo efficacy): "The spreading of misinformation online" - https://www.pnas.org/doi/10.1073/pnas.1517441113
This is one of the first papers on this topic I ever read, and it will help when considering misinfo/disinfo traffic patterns in a network.
uhh - not that you asked for additional reading.