Pharmacy Practice in Focus: Health Systems

July 2025
Volume 14
Issue 4

From Pasteur to Present: Historical and Contemporary Perspectives on Publication Bias in Scientific Literature

Key Takeaways

  • Geison's analysis of Pasteur's work illustrates the selective presentation of scientific data, influencing public perception of research success.
  • Modern biomedical research faces similar challenges with publication bias, often underrepresenting negative or null findings, affecting perceived intervention efficacy.

Selective reporting and underrepresentation distort biomedical evidence, affecting pharmacy practice

Gerald L. Geison’s The Private Science of Louis Pasteur provides a historical framework for examining long-standing patterns in scientific communication, particularly the divergence between experimental practice and published narrative. Geison’s findings, which draw on Pasteur’s previously inaccessible laboratory notebooks, offer detailed documentation of how public scientific claims may be shaped by selective presentation of data. Although situated in 19th-century Europe’s scientific milieu, Geison’s observations have relevance to present-day challenges in biomedical publishing, where concerns about publication bias remain a subject of analysis and debate.1 Understanding these dynamics can be essential for pharmacy professionals, who rely heavily on published clinical research findings to inform decisions about therapeutics, drug policy, and patient care.

Pharmacist scientist in a white coat working with chemicals. | Image Credit: Dennis | stock.adobe.com

Geison’s account highlights the constructed nature of the historical scientific record. Pasteur’s published papers often emphasized clear experimental success and linear methodological progression. However, his private notebooks reveal a more complex process characterized by trial and error, methodological changes, and episodes of uncertainty.

A frequently cited example is the 1881 anthrax vaccine demonstration at Pouilly-le-Fort, which Pasteur publicly described as a success resulting from oxygen attenuation techniques. However, his notebooks indicate that he employed a potassium bichromate–based method instead, an approach first developed by his competitor, Jean Joseph Henri Toussaint. Geison suggests this discrepancy may reflect Pasteur’s desire to maintain control over the narrative of his discoveries and uphold his standing as a leader in his scientific community.1

Such examples do not necessarily imply misconduct, but they underscore how the framing of scientific evidence can influence interpretation. Geison characterizes these disparities as part of a broader scientific culture in which the clarity of a public demonstration or publication may obscure the complexity of underlying research processes.1 For contemporary readers, these accounts serve as an early instance of selective disclosure in scientific communication, a phenomenon that remains under examination in modern clinical research reporting.

Recent literature in the biomedical sciences has addressed publication bias as an ongoing concern, particularly regarding the underrepresentation of studies with negative or null findings. For this article, negative findings refers to results of studies or experiments that do not show a statistically significant effect, fail to confirm a hypothesis, or indicate that a tested intervention does not work as intended; null results refers to findings in which a study fails to reject the null hypothesis. In their editorial in the Journal of Psychiatry & Neuroscience, Joober et al outline how the pattern of underrepresentation of negative or null findings may influence the perceived efficacy of medical interventions. The authors reference meta-analyses of antidepressant trials that retroactively included unpublished data accessed through the US Freedom of Information Act. When these data were combined with previously published results, the overall effect size of selective serotonin reuptake inhibitors (SSRIs) was substantially reduced, suggesting that the exclusion of negative findings had contributed to an overestimation of therapeutic value, which may have supported the SSRIs’ FDA approval and ultimate market availability.2

A complementary review by Montori et al in Mayo Clinic Proceedings further defines publication bias as the selective publication of studies based on the direction and magnitude of their results, particularly those without statistical significance. Data from these studies—termed negative findings—are less likely to be published, which can result in a skewed perception of an intervention’s efficacy when systematic reviews pool only the available published data. The authors emphasize that studies with positive data may be up to 3 times more likely to be published than studies with neutral or unfavorable outcomes and that publication bias may be particularly pronounced in observational research compared with randomized controlled trials.3

Joober et al further discuss structural factors influencing publication bias in their editorial. They note that researchers are often more likely to submit positive findings and that these are more likely to be accepted by journals, reviewed favorably, and cited in future publications. These tendencies may stem from academic and financial pressures, such as competition for funding or concerns relating to the drug regulatory review process. Journals may also prioritize positive findings due to perceived relevance, reader interest, or the potential for increased impact factor. According to findings of a study cited by Joober et al, the frequency of papers declaring significant statistical support for their a priori formulated hypotheses increased by 22% between 1990 and 2007 (n = 4656; P < .001). Specifically, psychology and psychiatry were among the disciplines where this increase was observed to be highest (P < .001).2 Notably, in the field of biomedical research in autism spectrum disorder (ASD), negative results appear to be nearly absent, according to Joober et al. Further, they note that in the 10 years before publication of the editorial in 2012, more than 89% of 437 studies in the biomedical research fields of immune dysregulation/inflammation, oxidative stress, mitochondrial dysfunction, and toxicant exposure reported a significant association between ASD and 1 or more parameters investigated, with 100% of 115 studies on oxidative stress reporting positive results.2

Ultimately, the underrepresentation of negative findings may influence the overall reliability of the scientific literature, say Joober et al. They present a statistical analysis suggesting that under typical conditions in biomedical research, where the a priori probability of a tested hypothesis being true is low and sample sizes may be limited, the predictive value of a negative result may exceed that of a positive result. In this view, negative findings, when methodologically sound, may contribute more reliably to knowledge accumulation than positive findings that are underpowered or driven by low-probability hypotheses.2
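The logic of this predictive-value argument can be illustrated with a short calculation. The prior probability, statistical power, and significance level below are illustrative assumptions chosen for demonstration, not figures taken from the editorial:

```python
# Illustrative sketch: predictive value of positive vs negative results
# for a single study, under assumed (hypothetical) conditions.

def predictive_values(prior, power, alpha):
    """Return (PPV, NPV) for one study.

    prior: a priori probability that the tested hypothesis is true
    power: probability of a significant result when the hypothesis is true
    alpha: probability of a significant result when it is false
    """
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    true_neg = (1 - prior) * (1 - alpha)
    false_neg = prior * (1 - power)
    ppv = true_pos / (true_pos + false_pos)  # P(hypothesis true | positive result)
    npv = true_neg / (true_neg + false_neg)  # P(hypothesis false | negative result)
    return ppv, npv

# Assumed low prior probability (10%), modest power (50%), alpha = 0.05:
ppv, npv = predictive_values(prior=0.10, power=0.50, alpha=0.05)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```

Under these assumed conditions, a negative result is considerably more informative (NPV of about 0.94) than a positive one (PPV of about 0.53), mirroring the editorial's point that methodologically sound negative findings can carry substantial evidentiary weight.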

Nair, writing in the Indian Journal of Anaesthesia, offers a complementary perspective, focusing on the perceived status of negative results among researchers and editors. He describes publication bias as the failure to publish data from studies based on the statistical significance or direction of their findings and suggests that this bias may lead to the exclusion of methodologically valid studies that fail to achieve a predetermined outcome. He identifies potential causes such as editorial decisions, lack of interest in revision by authors, and assumptions that nonsignificant findings represent failed research rather than valid contributions to knowledge.4

Nair outlines several categories of negative results. These include conclusive negative results (eg, findings that indicate no effect or an adverse outcome), exploratory negative results derived from secondary or post hoc analyses, and inconclusive results from underpowered studies. Although the first category is sometimes published, particularly in human clinical trials involving regulatory oversight, the latter 2 categories are less frequently represented in the literature. According to Nair, the absence of these studies may result in the loss of potentially valuable data and contribute to inefficiencies in research investment and duplication of effort.4

Both Joober et al and Nair discuss the implications of publication bias for systematic reviews and meta-analyses. These methods rely on comprehensive data collection across studies to estimate effect sizes and inform clinical guidelines. If negative findings are underreported or unavailable, estimates may be artificially inflated. Joober et al describe how repeated positive publications can increase belief in an effect without corresponding support from a complete data set—a dynamic that complicates the interpretation of hypothesis testing and may reduce the reliability of conclusions drawn from meta-analyses.2,4

The examples by Geison, Joober et al, and Nair do not suggest uniform misrepresentation or deliberate distortion; rather, they highlight systemic features of scientific communication that may unintentionally favor certain outcomes over others. The result is a scientific literature shaped not only by the quality of evidence but also by the mechanisms and incentives that govern its dissemination.

Rouan et al in the Journal of Vascular Surgery further expanded the scope of the discussion by highlighting how a lack of diversity can also be a significant contributor to publication bias.5 For instance, in Parkinson disease research, Gilbert and Standaert found that certain racial and ethnic groups, as well as women, were historically underrepresented, leading to disparities in diagnosis and access to expert care.6 In this case, the lack of representation translated into a tiered treatment system that disadvantaged these patient populations.

In another example, Patel et al conducted a meta-analysis of studies on nonalcoholic fatty liver disease and discovered that fewer than half the trials reported race or ethnicity and that the proportion of Hispanic patients included was substantially lower than the actual disease burden in that population.7 Such gaps suggest that even large-scale studies may fail to produce findings applicable to all affected groups.

The implications of underrepresentation extend into pediatric care as well. Natale et al identified racial and ethnic disparities in recruitment for a pediatric critical care trial, with Black and Hispanic families being approached less often for consent and declining more frequently when approached.8 These findings imply that structural factors in trial design and recruitment may contribute to uneven participation, influencing the applicability of published results.

Sex bias in research has also resulted in patient harm. For example, women have been shown to experience higher rates of adverse drug reactions than men, with evidence indicating that 80% of FDA drug withdrawals were linked to harmful effects in women.5 Prakash et al found persistent sex bias across phase 1 through 3 clinical trials despite National Institutes of Health mandates for inclusion, and Feldman et al reported underrepresentation of women across 7 of 11 disease categories over a 25-year review of published trial data.9,10 These omissions can ultimately lead to the approval of therapies that may be less safe or less effective for women.

Additional disparities occur when data are aggregated without subgroup analyses. Studies from the US Department of Veterans Affairs, for instance, typically consist of cohorts that are over 90% male but inform general treatment recommendations. Without disaggregated reporting, researchers may miss differential outcomes by sex, race, or age, obscuring potential harms or benefits for excluded groups.5

Furthermore, sex-inclusive research has been associated with greater scientific impact. Xiao et al found that publications including sex-based statistical analysis, reporting, and discussion received significantly more citations than those that did not.11 This suggests that research reflecting diversity principles is more applicable and valued within the scientific community.

Rouan et al also draw attention to how a lack of diversity among editorial boards and reviewers may perpetuate this form of bias. Female and minority authors remain underrepresented in first or senior authorship roles, invited commentaries, and citations. The authors suggest that editorial teams that lack diversity may overlook implicit bias in submitted research, leading to the publication of data from studies that inadequately reflect the broader patient population. Moreover, when peer reviewers or editors unconsciously favor manuscripts written by individuals of similar backgrounds, this phenomenon—homophily—can skew publication trends toward majority-dominated perspectives.5

Montori et al add that publication bias can intrude at virtually every phase of the research and publication process—from study design and data collection to journal selection, editorial review, and database indexing. For example, researchers may choose not to submit negative study data due to a lack of time or perception of its lack of importance, whereas editors may give such study data lower priority or reject them on the grounds of limited novelty. Even if published, such findings may be placed in nonindexed or lower-visibility journals, reducing their accessibility for future systematic review. This progression of decisions, Montori et al note, increases the likelihood that systematic reviews based only on indexed, published data will reflect a distorted view of intervention efficacy.3

Nair highlights statistical techniques intended to detect and quantify potential bias in meta-analyses, including funnel plot asymmetry, Egger regression, and Rosenthal fail-safe N. Although these tools may indicate the presence of bias, they cannot restore omitted data or fully correct for their absence. Nair notes that the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement includes a requirement to report planned assessments of publication bias. However, its effectiveness depends on the availability of published negative results, which structural and cultural barriers may still limit.4
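One of the techniques Nair names, Egger's regression test, can be sketched in a few lines. This is a minimal illustration using hypothetical effect sizes and standard errors; dedicated meta-analysis software applies additional refinements:

```python
import numpy as np
from scipy import stats

# Hypothetical per-study effect sizes and standard errors from a meta-analysis.
effects = np.array([0.42, 0.35, 0.51, 0.60, 0.28, 0.55, 0.47, 0.38])
ses = np.array([0.10, 0.15, 0.20, 0.25, 0.12, 0.22, 0.18, 0.14])

# Egger's test regresses the standardized effect (effect / SE) on
# precision (1 / SE). A symmetric funnel plot yields an intercept near
# zero; a large intercept suggests small-study (publication) bias.
z = effects / ses
precision = 1.0 / ses
result = stats.linregress(precision, z)

# Test whether the intercept differs from zero (t-test, df = n - 2).
n = len(effects)
t_stat = result.intercept / result.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
print(f"Egger intercept = {result.intercept:.2f} (p = {p_value:.3f})")
```

As Nair cautions, a nonsignificant intercept does not prove the absence of bias, and no regression can recover studies that were never published.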

The broader consequences of these dynamics are increasingly recognized in discussions about research reproducibility and transparency. Joober et al cite analyses by Ioannidis and others who have argued that the cumulative effect of low power, low prior probability, and multiple forms of bias—including selective reporting—may render a proportion of published biomedical findings unreliable. Although this view has generated debate, it has also prompted further investigation into how evidence is generated, evaluated, and communicated.2

Geison’s account of Pasteur’s work in 19th-century Europe also aligns with this line of inquiry by showing how even celebrated scientific contributions may be influenced by selective framing. In Pasteur’s case, this framing occurred through public lectures, publications, and curated experimental demonstrations. These narratives emphasized clarity and success, omitting failed attempts or methodological ambiguities recorded in his notebooks. Geison characterizes this approach as part of a broader scientific culture in which success is more readily valorized than uncertainty. Ultimately, Geison’s findings help to underscore the importance of considering both published and unpublished data, or public and private science, when assessing the credibility of scientific claims—even historically foundational claims by leaders of the scientific community who have helped to shape the trajectory of the field positively.1

The implications of publication bias in the scientific literature for pharmacy professionals are multifaceted. Clinical pharmacists, formulary review board members, regulatory reviewers, and research pharmacists rely on published literature to assess medical therapies’ efficacy, safety, and cost-effectiveness. When the literature disproportionately reflects positive findings, the resulting evidence base may overstate benefits and underrepresent limitations or risks. This has practical consequences for patient care, policy development, and the evaluation of novel interventions.

Although various initiatives have sought to address these concerns, such as trial registries, open-access platforms, and reporting guidelines, the cited literature notes that challenges remain. Joober et al point to ongoing issues in ensuring access to negative findings from exploratory and animal studies, which may not be subject to registration requirements.2 Moreover, concerns about journal impact, funding models, and research incentives continue to affect what is submitted and published. As such, understanding the context in which scientific knowledge is produced and disseminated is vital for those interpreting biomedical literature. Furthermore, published data from research investigating the impact and effect of publication bias on the practice of medicine are limited, which may hint toward a need for additional inquiry. Notably, concerns regarding the publication opportunities available for such research may present another cyclical challenge in addressing publication bias. When the challenge is present within the very mechanism by which knowledge is shared, unearthing flaws in the system can appear futile or self-defeating.

The historical observations provided by Geison, in conjunction with the contemporary analyses presented by Joober et al, Nair, Montori et al, Rouan et al, and others, provide a narrative about the complex relationship between scientific data, publication practices, and knowledge formation. These perspectives suggest that a combination of methodological, institutional, and cultural factors shapes the visibility of research findings within the scientific literature. Efforts to reduce publication bias may be strengthened by addressing equity, representation, and inclusivity across all stages of the publication process. For pharmacy professionals, awareness of these potentially influential factors in the publication of scientific literature can inform more critical engagement with clinical evidence and support efforts to promote transparency and balance in scientific communication.

REFERENCES
  1. Geison GL. The Private Science of Louis Pasteur. Princeton University Press; 1995.
  2. Joober R, Schmitz N, Annable L, Boksa P. Publication bias: what are the challenges and can they be overcome? J Psychiatry Neurosci. 2012;37(3):149-152. doi:10.1503/jpn.120065
  3. Montori VM, Smieja M, Guyatt GH. Publication bias: a brief review for clinicians. Mayo Clin Proc. 2000;75(12):1284-1288. doi:10.4065/75.12.1284
  4. Nair AS. Publication bias–importance of studies with negative results! Indian J Anaesth. 2019;63(6):505-507. doi:10.4103/ija.IJA_142_19
  5. Rouan J, Velazquez G, Freischlag J, Kibbe MR. Publication bias is the consequence of a lack of diversity, equity, and inclusion. J Vasc Surg. 2021;74(2):111S-117S. doi:10.1016/j.jvs.2021.03.049
  6. Gilbert RM, Standaert DG. Bridging the gaps: more inclusive research needed to fully understand Parkinson’s disease. Mov Disord. 2020;35(2):231-234. doi:10.1002/mds.27906
  7. Patel P, Muller C, Paul S. Racial disparities in nonalcoholic fatty liver disease clinical trial enrollment: a systematic review and meta-analysis. World J Hepatol. 2020;12(8):506-518. doi:10.4254/wjh.v12.i8.506
  8. Natale JE, Lebet R, Joseph JG, et al; Randomized Evaluation of Sedation Titration for Respiratory Failure (RESTORE) Study Investigators. Racial and ethnic disparities in parental refusal of consent in a large, multisite pediatric critical care clinical trial. J Pediatr. 2017;184:204-208.e1. doi:10.1016/j.jpeds.2017.02.006
  9. Prakash VS, Mansukhani NA, Helenowski IB, Woodruff TK, Kibbe MR. Sex bias in interventional clinical trials. J Womens Health (Larchmt). 2018;27(11):1342-1348. doi:10.1089/jwh.2017.6873
  10. Feldman S, Ammar W, Lo K, Trepman E, van Zuylen M, Etzioni O. Quantifying sex bias in clinical studies at scale with automated data extraction. JAMA Netw Open. 2019;2(7):e196700. doi:10.1001/jamanetworkopen.2019.6700
  11. Xiao N, Mansukhani NA, Mendes de Oliveira DF, Kibbe MR. Association of author gender with sex bias in surgical research. JAMA Surg. 2018;153(7):663-670. doi:10.1001/jamasurg.2018.0040
