- cross-posted to:
- [email protected]
ghost archive | Excerpts:
… findings with null or negative results — those that fail to find a relationship between variables or groups, or that go against the preconceived hypothesis — gather dust in favour of studies with positive or significant findings. A 2022 survey of scientists in France, for instance, found that 75% were willing to publish null results they had produced, but only 12.5% were able to do so [2]. Over time, this bias in publications distorts the scientific record, and a focus on significant results can encourage researchers to selectively report their data or exaggerate the statistical importance of their findings. It also wastes time and money, because researchers might duplicate studies that had already been conducted but not published. Some evidence suggests that the problem is getting worse, with fewer negative results seeing the light of day [3] over time.
At the crux of both academic misconduct and publication bias is the same ‘publish or perish’ culture, perpetuated by academic institutions, research funders, scholarly journals and scientists themselves, that rewards researchers when they publish findings in prestigious venues, Scheel says.
But these academic gatekeepers have biases, say some critics, who argue that funders and top-tier journals often crave novelty and attention-grabbing findings. Journal editors worry that pages full of null results will attract fewer readers, says Simine Vazire, a psychologist at the University of Melbourne in Australia and editor of the journal Psychological Science.
One of the most significant changes to come out of the replication crisis is the expansion of preregistration (see ‘Registrations on the rise’), in which researchers must state their hypothesis and the outcomes they intend to measure in a public database at the outset of their study (this is already the norm in clinical trials). … Preliminary data look promising: when Scheel and her colleagues compared the results of 71 registered reports with a random sample of 152 standard psychology manuscripts, they found that 44% of the registered reports had positive results, compared with 96% of the standard publications [7] (see ‘Intent to publish’). And Nosek and his colleagues found that reviewers scored psychology and neuroscience registered reports higher on metrics of research rigour and quality compared with papers published under the standard model [8].
They shouldn’t be published in journals at all. Any solution that tries to make journals better, rather than eliminating them in favour of a free online federated database, isn’t addressing the root issue.
I misinterpreted your first sentence… until I read the rest of your comment.
I thought you were saying null results shouldn’t be published. Hackles go up. Keep reading angrily. Ohhhh… ALL results should be publicly available! Well, that’s very different!
I do have a nitpick, though: if the internet has taught us nothing else, it’s that scammers, influencers, conspiracy theorists, deniers, and exploiters will ALL post lies and disinformation in any unvetted space they can find. Somebody has got to do some curation, and somebody has to pay them enough to ensure that work gets done.
Don’t you need to pay reviewers, though? Open access is great, but totally free seems unsustainable.
Peer reviewers don’t get paid under the current system though, nor do the researchers. Just the journals get paid, for providing a platform to take advantage of everyone else’s hard work.
Shit, really? Why does anybody sign up to do it then?
But journals do exist, and while it’s great to be optimistic about a future without them, it’s counterproductive to advocate against a better future that is much more likely to come about.
How about, in addition to attempting to publish null results in existing journals, also publishing them in free online federated databases? Or better yet, work to establish a federated database focused on null results, to serve as a repository for articles that struggle to get published, so that scientists can draw on it as a useful resource.