
Original scientific article

https://doi.org/10.11613/BM.2017.030201

Dealing with the positive publication bias: Why you should really publish your negative results

Ana Mlinarić (ORCID: orcid.org/0000-0001-8138-4170); Department of Laboratory Diagnostics, University Hospital Centre Zagreb, Zagreb; Research Integrity Editor, Biochemia Medica
Martina Horvat; Department of Medical Laboratory Diagnostics, University Hospital Split, Split; Research Integrity Editor, Biochemia Medica
Vesna Šupak Smolčić (ORCID: orcid.org/0000-0002-0539-6513); Clinical Institute of Laboratory Diagnostics, Clinical Hospital Center Rijeka, Rijeka; Research Integrity Editor, Biochemia Medica



Pages: 447-452





Abstract

Studies with positive results are far more strongly represented in the literature than studies with negative results, producing the so-called publication bias. This review aims to discuss the problems surrounding negative results and to emphasize the importance of reporting them. Underreporting of negative results introduces bias into meta-analyses, which consequently misinforms researchers, doctors and policymakers. Resources are potentially wasted on re-testing hypotheses that have already been disputed in research that remains unpublished and therefore unavailable to the scientific community. Ethical obligations must also be considered when reporting the results of studies on human subjects, as the participants exposed themselves to risk with the assurance that the study was performed to benefit others. Some studies disprove the common conception that journal editors preferentially publish positive findings because they are considered more citable. Therefore, all stakeholders, but especially researchers, need to be conscientious about disseminating negative and positive findings alike.

Keywords

negative results; publication bias; research integrity; medical journals

Hrčak ID:

187580

URI

https://hrcak.srce.hr/187580

Date of publication:

15.10.2017.





Introduction

Studies with a successfully proven hypothesis are represented in the literature to a much greater extent than studies that “failed” to prove the hypothesis and thus delivered so-called negative results. It seems that “successful and productive” studies are more interesting, more readable and therefore more “valuable” to publishers, editors and readers. This can be inferred from the fact that positive results are cited more favourably in the scientific and medical literature (1,2). The proportion of positive results in the scientific literature increased from 70.2% in 1990/1991 to 85.9% in 2007. On average, the yearly increase was 6%, and this effect was consistent across most disciplines and countries (3). The non-reporting of clinical studies is also evident: merely half of the clinical studies approved by the research ethics committee of the University of Freiburg in Germany were published as a full article eight to ten years later, meaning that the other half stayed unpublished and potentially forgotten (4).

Moreover, many scientists have produced studies with an unproven hypothesis and regarded them as unimportant, unworthy or simply not good enough for publication. The scientist Peter Dudek illustrated this in a Twitter comment from 2013: “If I chronicled all my negative results during my studies, the thesis would have been 20,000 pages instead of 200.”

So what are negative results? There are three cases of negative results:

  • The study is too small and lacks power. Findings inconclusively suggest no effect.

  • Despite a large enough sample and a well-planned study, the findings clearly suggest no effect.

  • Instead of the desired outcome, the study produces the opposite effect entirely (5).

The first case is the result of a poorly planned study; regardless of the outcome, such studies should not be published, or should at least be interpreted in the context of their limitations. The second case is the genuine case of a negative result: such studies are well designed and well executed and therefore deserve to be published. Arguably, the third example is not an actual case of negative results, because even though the study does not confirm the tested hypothesis, it still shows a significant, albeit opposite, effect.

Not submitting and publishing studies brings forth many issues regarding ethics, statistics and finances. The aim of this article is to discuss the problems surrounding the publication of negative results from well-designed and well-executed studies. We report the consequences of underreporting negative results to emphasize why all stakeholders (authors, journal editors and publishers alike) should strive for a more accurate interpretation of scientific results.

Problems encountered when submitting and publishing negative results

After the tedious work of designing a study, recruiting study participants, obtaining the necessary funding, ethics committee approval and patients’ informed consent, collecting samples and performing experiments, what happens when statistical analysis of the results reveals no statistical significance? The scientist who performed the work understands that behind those “negative” results lies a great amount of invested effort, time and resources; as such, those results have their value and should be published. Unfortunately, there are challenges that our hypothetical scientist is about to face during the process of submitting the work for publication, challenges that discourage many from publishing their negative results.

Editors favour studies with positive results

Arguably, many editors prefer to publish positive results, which are considered more intriguing and ultimately are more citable (1,2,6). Multiple analyses showed that papers are more likely to be published, cited and accepted by high-ranking journals if the reported results are positive (1,3,7-9).

Once a manuscript has been submitted to a medical journal, however, studies have found no statistically significant difference in the likelihood of publication between positive and non-positive results (6,10-12). An editorial preference for positive results in drug randomized controlled trials (RCTs) was also disputed by an analysis of RCTs submitted to and published in eight medical journals and by a study examining published anaesthesia research, both suggesting that the reason for publication bias primarily lies with the authors (11,12).

Authors favour submitting and citing positive results

Now let us consider a more realistic scenario: our hypothetical scientist works in a clinical laboratory, where his responsibilities also entail assessing and ensuring the desired analytical quality of test results, communicating critical results to clinicians, educating students, dealing with laboratory finances, self-education and, inevitably, participating in different research projects. How would our scientist deal with the perceivably difficult task of publishing an article with negative results? It is almost reasonable to think that he would, as many authors have confessed, consider the effort of writing up an article to be in vain if rejection was to be expected and would therefore choose to publish work with positive results in order to maximize his output (13).

Some studies state that publication bias mostly originates when scientists choose not to report their findings, since rejection by a journal accounts for only 6% of all the reasons for non-publication (14,15). Instead, the main reasons reported by investigators were lack of time and priority, an incomplete study, a study not intended for publication, a manuscript in preparation or under review, unimportant or negative results, low study quality, fear of rejection, actual rejection and others (15).

Data misinterpretation for obtaining “better suited” results

It has been argued that competition in science contributes to the misinterpretation and distortion of results (7). In the desire to find a positive study outcome, authors can succumb to the pitfall of focusing on the positive rather than the negative (16). Faced with non-significant results, authors may decide to tweak the hypothesis to better suit their data, a practice known as HARKing (hypothesizing after the results are known) (17). HARKing entails selective scrutiny, or complete disregard, of the data that do not fit the tested hypothesis. In addition, there are numerous reports of scientific misconduct in which scientists have completely falsified the published data (18).

Obstruction in the dissemination of research results by the interested parties

Another serious problem that researchers face is the deliberate obstruction of the dissemination of research results by interested parties, such as sponsors and pharmaceutical companies, whose interest is not to make negative findings publicly available. Industry-sponsored RCTs in neurodegenerative diseases, paediatrics, surgery and cardiovascular medicine were less likely to be published than academia-sponsored studies (19-22).

Consequences of not publishing negative results

Authors often do not think about the aftermath of not publishing negative results and “failed” studies. Logically, the invested money, time and resources are wasted if the study results remain unpublished and unreported. Furthermore, if one scientist considered a hypothesis worth exploring, chances are that somebody else has had the same or a similar idea. Not publishing a “failed” study can therefore waste other researchers’ time and money on a study that will presumably also produce negative results. Ultimately, this vicious circle leads to personal discouragement and a lot of wastefully spent resources that could have been used to build upon an already tested hypothesis.

Clinical trials’ results, especially those of studies with serious adverse events, often remain unpublished (23). Studies that remain unpublished for reasons other than having negative results have an unknown and unpredictable effect on the results of meta-analyses (24). However, it is clear that a positive bias is introduced when studies with negative results remain unreported, thereby jeopardizing the validity of meta-analysis (25,26). This is potentially harmful, as a falsely positive outcome of a meta-analysis misinforms researchers, doctors, policymakers and the greater scientific community, especially when wrong conclusions are drawn about the benefit of a treatment.
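
To make this mechanism concrete, the following minimal simulation (an illustration added here, not taken from the cited studies; all numbers are hypothetical) shows how a fixed-effect, inverse-variance pooled estimate drifts upwards when only studies reporting a statistically significant positive result reach the literature:

```python
# Illustrative simulation of publication bias in meta-analysis.
# Assumptions: a small true standardized effect, normally distributed
# study-level estimates, and a deliberately crude selection rule
# ("publish only if z > 1.96") standing in for positive-results bias.
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.10                         # hypothetical true effect
n_studies = 200
se = rng.uniform(0.05, 0.25, n_studies)    # per-study standard errors
est = rng.normal(true_effect, se)          # observed study effects
published = est / se > 1.96                # only significant positive results "get published"

def pooled(effects, ses):
    """Fixed-effect inverse-variance pooled estimate."""
    w = 1.0 / ses ** 2
    return float(np.sum(w * effects) / np.sum(w))

print(f"True effect:                {true_effect:.3f}")
print(f"Pooled over all studies:    {pooled(est, se):.3f}")
print(f"Pooled over published only: {pooled(est[published], se[published]):.3f}")
```

In this sketch the estimate pooled over the “published” subset is markedly larger than the true effect, which is exactly the kind of distortion described above.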

Furthermore, human subjects have given their informed consent to participate in a study with the assurance that the research is done to benefit others or to contribute to scientific advancement. These participants exposed themselves to risk, and it is therefore our moral obligation to publish the results, no matter the outcome of the study (5).

Unpublished negative results nourish the interests of those who benefit from these results being hidden. Pharmaceutical companies often have no incentive to publish negative results of drug investigations. Alarmingly, 60% of clinical trials with findings of inadequate drug efficacy or safety concerns remained unpublished (27). Reporting these results may save time and money for patients and society, as well as help avoid previously unknown side effects that are usually discovered only after new drugs are released (28).

Position of major organizations, funders and journal editors on publishing negative results

The overwhelming issue of publication bias and its effects on the validity of meta-analyses has forced many organizations to tackle this problem by implementing recommendations and mandates. The International Committee of Medical Journal Editors (ICMJE) recommends that journals require the registration of clinical trials as a condition for publication. The ICMJE believes that there is an ethical obligation to share data generated by such trials: “…to prevent selective publication and selective reporting of research outcomes, and to prevent unnecessary duplication of research effort” (29). In 2015, the World Health Organization (WHO) published the Statement on Public Disclosure of Clinical Trial Results, which makes the registration of clinical trials mandatory before the first study participant is enrolled and sets strict timeframes for reporting trial results. The Statement also encourages publishing the results of past unreported clinical trials (30). This is supported by the Consolidated Standards of Reporting Trials (CONSORT) Statement, which includes a requirement to register clinical trials at the time of their inception, precisely because of the evident underreporting of clinical trials (31). The Declaration of Helsinki clearly states that all contributors, including researchers, authors, sponsors, editors and publishers, are ethically obligated to disseminate the results of research (32). The Committee on Publication Ethics (COPE) states that studies with negative results should not be excluded, in order to support debate in science (33). Publication bias is also recognized by research funders, who consider that publishing negative results should be a priority (34).

The effects of publication bias have not gone unnoticed among scientists and clinicians: in an online survey, nearly 70% of researchers reported that they had been unable to reproduce published results (35). Because of the high rate of published results found to be irreproducible, and considering that many clinical trial results have gone unpublished, researchers have organized the AllTrials campaign, which promotes the publication of currently unpublished clinical trials. The campaign urges all stakeholders to implement measures to achieve publication of such results, with the aim of obtaining all evidence about treatment effects (36).

There are a number of journals whose scope is to publish negative results in their respective scientific fields in order to compensate for the evident publication bias (e.g. Journal of Articles in Support of the Null Hypothesis, Journal of Negative Results in BioMedicine, Journal of Pharmaceutical Negative Results, Nature Negative Results section). This, however, can introduce a bias in favour of negative results, which is counterproductive. The criteria for publication should be the quality of the study and its power, no matter the outcome. Results obtained from a methodologically well-designed study are trustworthy regardless of whether they confirm or disprove the null hypothesis.

Assessing the validity of negative results

With the emphasis on publishing negative results, authors, reviewers and editors need to discern true negative results from those obtained from poorly designed and executed research. Attention should always be paid to study quality, not to a preferred outcome.

The validity of negative results can be assessed in the same way as the sensitivity of a diagnostic test (37). When using a diagnostic test with low sensitivity, a negative test result cannot rule out the disease. Likewise, the effect size and confidence intervals of a study with negative results should be reported so that the clinical significance of the results can be correctly assessed. Discouragingly, only 30% of negative studies published in prominent medical journals reported power and/or sample size calculations (37).
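
As a practical illustration of this recommendation (a minimal sketch with entirely hypothetical numbers, not data from the cited study), the effect size, an approximate confidence interval and the achieved or required statistical power for a simple two-group comparison could be computed as follows, here using the statsmodels package in Python:

```python
# Hypothetical worked example: effect size with CI, achieved power,
# and the sample size needed for 80% power in a two-group comparison.
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical group summaries (not from the article)
n1, n2 = 40, 40
mean1, mean2 = 5.2, 5.0
sd_pooled = 1.1

cohens_d = (mean1 - mean2) / sd_pooled            # standardized effect size
se_diff = sd_pooled * np.sqrt(1 / n1 + 1 / n2)
ci_low = (mean1 - mean2) - 1.96 * se_diff         # approximate 95% CI for the difference
ci_high = (mean1 - mean2) + 1.96 * se_diff

analysis = TTestIndPower()
achieved_power = analysis.solve_power(effect_size=cohens_d, nobs1=n1,
                                      alpha=0.05, ratio=n2 / n1)
n_needed = analysis.solve_power(effect_size=cohens_d, power=0.80,
                                alpha=0.05, ratio=1.0)

print(f"Cohen's d = {cohens_d:.2f}, 95% CI for the difference [{ci_low:.2f}, {ci_high:.2f}]")
print(f"Achieved power with n = {n1} per group: {achieved_power:.2f}")
print(f"n per group needed for 80% power: {np.ceil(n_needed):.0f}")
```

With such a small effect size the achieved power is far below the conventional 80%, which is precisely the situation in which a “negative” result is inconclusive rather than informative.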

Every time a hypothesis is tested, we risk falsely concluding that the observed results are not due to chance and that an effect exists (type I error; false positive, assessed by alpha statistics), or incorrectly concluding that the results are due to chance when in fact an effect is present (type II error; false negative, assessed by beta statistics). Oberhofer and Lennon place the emphasis on beta statistics, urging authors to apply the same criteria for detecting a type II error as for a type I error. Negative results should be interpreted more rigorously, to assert that the lack of difference is not due to chance. They state that if we have historically accepted 95% confidence for rejecting a null hypothesis, the same standard should also be applied to beta statistics (38).
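
In formal notation (a standard statistical formulation added here for clarity, not quoted from the article), the two error probabilities and the power of a test are:

```latex
\begin{aligned}
\alpha    &= P(\text{reject } H_0 \mid H_0 \text{ true})         && \text{type I error (false positive)} \\
\beta     &= P(\text{fail to reject } H_0 \mid H_1 \text{ true}) && \text{type II error (false negative)} \\
1 - \beta &= \text{power}                                        && \text{probability of detecting a true effect}
\end{aligned}
```

Requiring a small beta (high power), as Oberhofer and Lennon suggest, is thus the mirror image of the conventional alpha = 0.05 threshold for rejecting the null hypothesis.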

Conclusions

Studies with findings suggesting no effect or an opposite effect are considered negative results (5). It has been shown that the proportion of positive results in the scientific literature has been increasing in most disciplines in recent years, which entails a corresponding reduction in the proportion of negative results (3). The consequences of leaving negative findings unreported, apart from an unproductive expenditure of time, motivation and resources, are a positive bias in meta-analyses and erroneous conclusions, which ultimately do serious harm to the scientific endeavour (25,26). Major organizations recommend and encourage reporting the results of all clinical trials in the interest of ethical dissemination of studies and drawing unbiased conclusions (29-33).

Biochemia Medica considers all submitted manuscripts irrespective of positive or negative findings, as long as the manuscript is within the scope of the journal and prepared according to the Instructions to authors. The recommendations on how results should be reported are the same for all manuscripts, regardless of positive or negative findings, and comply with the recommendations of the leading organizations. Whether the results are negative or positive, it is recommended to report the effect size and confidence intervals. Researchers, journal editors and funders need to be conscious of the importance of negative results, and should report and support the dissemination of negative and positive findings alike.

Notes

[1] Conflicts of interest None declared.

References

1 

Duyx B, Urlings MJE, Swaen GHM, Bouter LM, Zeegers MP. Scientific Citations Favor Positive Results: A Systematic Review and Meta-analysis. J Clin Epidemiol. 2017;88:92–101. https://doi.org/10.1016/j.jclinepi.2017.06.002 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/28603008

2 

Jannot AS, Agoritsas T, Gayet-Ageron A, Perneger TV. Citation bias favouring statistically significant studies was present in medical research. J Clin Epidemiol. 2013;66:296–301. https://doi.org/10.1016/j.jclinepi.2012.09.015 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/23347853

3 

Fanelli D. Negative results are disappearing from most disciplines and countries. Scientometrics. 2012;90:891–904. https://doi.org/10.1007/s11192-011-0494-7

4 

Blümle A, Schandelmaier S, Oeller P, Kasenda B, Briel M, von Elm E, et al. Premature Discontinuation of Prospective Clinical Studies Approved by a Research Ethics Committee – A Comparison of Randomised and Non-Randomised Studies. PLoS One. 2016;11:e0165605. https://doi.org/10.1371/journal.pone.0165605 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/27792749

5 

Sandercock P. Negative results: why do they need to be published? Int J Stroke. 2012;7:32–3. https://doi.org/10.1111/j.1747-4949.2011.00723.x PubMed: http://www.ncbi.nlm.nih.gov/pubmed/22188851

6 

Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, Hogan JW, et al. Publication bias in editorial decision making. JAMA. 2002;287:2825–8. https://doi.org/10.1001/jama.287.21.2825 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/12038924

7 

Fanelli D. Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data. PLoS One. 2010;5:e10271. https://doi.org/10.1371/journal.pone.0010271 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/20422014

8 

Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3:e3081. https://doi.org/10.1371/journal.pone.0003081 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/18769481

9 

Murtaugh PA. Journal quality, effect size, and publication bias in meta-analysis. Ecology. 2002;83:1162–6. https://doi.org/10.1890/0012-9658(2002)083[1162:JQESAP]2.0.CO;2

10 

Okike K, Kocher MS, Mehlman CT, Heckman JD, Bhandari M. Publication bias in orthopedic research: An analysis of scientific factors associated with publication in The journal of bone and joint surgery (American volume). J Bone Joint Surg Am. 2008;90:595–601. https://doi.org/10.2106/JBJS.G.00279 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/18310710

11 

van Lent M, Overbeke J, Out HJ. Role of Editorial and Peer Review Processes in Publication Bias: Analysis of Drug Trials Submitted to Eight Medical Journals. PLoS One. 2014;9:e104846. https://doi.org/10.1371/journal.pone.0104846 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/25118182

12 

Chong SW, Collins NF, Wu CY, Liskaser GM, Peyton PJ. The relationship between study findings and publication outcome in anesthesia research: a retrospective observational study examining publication bias. Can J Anaesth. 2016;63:682–90. https://doi.org/10.1007/s12630-016-0631-0 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/27038290

13 

Scherer RW, Ugarte-Gil C, Schmucker C, Meerpohl JJ. Authors report lack of time as main reason for unpublished research presented at biomedical conferences: a systematic review. J Clin Epidemiol. 2015;68:803–10. https://doi.org/10.1016/j.jclinepi.2015.01.027 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/25797837

14 

Song F, Loke Y, Hooper L. Why Are Medical and Health-Related Studies Not Being Published? A Systematic Review of Reasons Given by Investigators. PLoS One. 2014;9:e110418. https://doi.org/10.1371/journal.pone.0110418 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/25335091

15 

Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8). https://doi.org/10.3310/hta14080 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/20181324

16 

Teixeira da Silva JA. Negative results: negative perceptions limit their potential for increasing reproducibility. J Negat Results Biomed. 2015;14:12. https://doi.org/10.1186/s12952-015-0033-9 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/26149259

17 

Kerr NL. HARKing: Hypothesizing After the Results are Known. Pers Soc Psychol Rev. 1998;2:196–217. https://doi.org/10.1207/s15327957pspr0203_4 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/15647155

18 

The Office of Research Integrity. Case summaries. Available at: https://ori.hhs.gov/case_summary. Accessed July 3rd 2017.

19 

Stefaniak JD, Lam TCH, Sim NE, Al-Shahi Salman R, Breen DP. Discontinuation and non-publication of neurodegenerative disease trials: a cross-sectional analysis. Eur J Neurol. 2017 Jun 21 [cited 2017 Jul 10]. [Epub ahead of print] https://doi.org/10.1111/ene.13336 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/28636179

20 

Pica N, Bourgeois F. Discontinuation and Nonpublication of Randomized Clinical Trials Conducted in Children. Pediatrics. 2016;138(3):e20160223. https://doi.org/10.1542/peds.2016-0223 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/27492817

21 

Chapman SJ, Shelton B, Mahmood H, Fitzgerald JE, Harrison EM, Bhangu A, et al. Discontinuation and non-publication of surgical randomised controlled trials: observational study. BMJ. 2014;349:g6870. https://doi.org/10.1136/bmj.g6870 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/25491195

22 

Roddick AJ, Chan FTS, Stefaniak JD, Zheng SL. Discontinuation and non-publication of clinical trials in cardiovascular medicine. Int J Cardiol. 2017;244:309–15. https://doi.org/10.1016/j.ijcard.2017.06.020 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/28622947

23 

Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals. PLoS Med. 2013;10:e1001566. https://doi.org/10.1371/journal.pmed.1001566 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/24311990

24 

Hart B, Duke D, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012;344:d7202. https://doi.org/10.1136/bmj.d7202 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/22214754

25 

Rothstein HR. Publication bias as a threat to the validity of meta-analytic results. J Exp Criminol. 2008;4:61. https://doi.org/10.1007/s11292-007-9046-9

26 

Kicinski M. How does under-reporting of negative and inconclusive results affect the false-positive rate in metaanalysis? A simulation study. BMJ Open. 2014;4:e004831. https://doi.org/10.1136/bmjopen-2014-004831 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/25168036

27 

Hwang TJ, Carpenter D, Lauffenburger JC, Wang B, Franklin JM, Kesselheim AS. Failure of Investigational Drugs in Late-Stage Clinical Development and Publication of Trial Results. JAMA Intern Med. 2016;176:1826–33. https://doi.org/10.1001/jamainternmed.2016.6008 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/27723879

28 

Krleza Jeric K. Sharing of clinical trial data and research integrity. Period Biol. 2014;116:337–9.

29 

International Committee of Medical Journal Editors. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Available at: http://www.icmje.org/recommendations/. Accessed July 3rd 2017.

30 

World Health Organization. Statement on Public Disclosure of Clinical Trial Results. Available at: http://www.who.int/ictrp/results/reporting/en/. Accessed July 3rd 2017.

31 

Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomized trials. BMC Med. 2010;8:18. https://doi.org/10.1186/1741-7015-8-18 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/20334633

32 

World Medical Association. Declaration of Helsinki Ethical Principles for Medical Research Involving Human Subjects. JAMA. 2013;310:2191–4. https://doi.org/10.1001/jama.2013.281053 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/24141714

33 

Committee on Publication Ethics. Code of conduct and best practice guidelines for journal editors. Available at: https://publicationethics.org/resources/guidelines. Accessed July 3rd 2017.

34 

Collins E. Publishing priorities of biomedical research funders. BMJ Open. 2013;3:e004171. https://doi.org/10.1136/bmjopen-2013-004171 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/24154520

35 

Baker M. Is there a reproducibility crisis? Nature. 2016;533:452–4. https://doi.org/10.1038/533452a PubMed: http://www.ncbi.nlm.nih.gov/pubmed/27225100

36 

AllTrials campaign. Available at: http://www.alltrials.net/find-out-more/all-trials/. Accessed July 3rd 2017.

37 

Hebert RS, Wright SM, Dittus RD, Elasy TA. Prominent medical journals often provide insufficient information to assess the validity of studies with negative results. J Negat Results Biomed. 2002;1:1. https://doi.org/10.1186/1477-5751-1-1 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/12437785

38 

Oberhofer AL, Lennon RP. A call for greater power in an era of publishing negative results. Acta Med Acad. 2014;43:172–3. https://doi.org/10.5644/ama2006-124.118 PubMed: http://www.ncbi.nlm.nih.gov/pubmed/25529524

