Interdisciplinary Centre for Ethics, Department of Philosophy
Jagiellonian University
email: elena.popa@uj.edu.pl
Recent philosophical investigations of vaccination research and policy have highlighted the importance of public trust for the success of vaccination programs. This paper will look at the problem of expert disagreement and trustworthiness in the case of vaccine research. Giubilini, Gur-Arie and Jamrozik (2025) have argued for a notion of trustworthiness involving virtues of experts, such as epistemic humility and transparency, in order to deal with situations of uncertainty, when a minority of scientists disagrees with the majority consensus. Using this account as a starting point, I will investigate the conditions necessary for the trustworthiness of institutions involved in vaccine research under uncertainty. I will highlight that, in addition to the virtues of individual researchers, certain structural features of the research community are required. More specifically, I will make the case for conditions that enable a fair resolution of disagreement: involving all relevant members of the scientific community and seeking solutions that adequately respond to the public’s needs and interests.
Keywords: vaccine research, trust in science, institutional trust, science and values, uncertainty, disagreement
Recent philosophical investigations of vaccination research and policy have highlighted the role of public trust in successful public health interventions, particularly vaccination programs. Maya Goldenberg’s work on vaccine hesitancy has criticized framings of the phenomenon that pit experts against an ignorant public and place the burden on the public to gain additional knowledge about vaccination.1 According to Goldenberg, vaccine hesitancy is a consequence of the decrease in public trust stemming from the experts’ often dismissive attitudes towards concerns raised by members of the public. Vaccine hesitancy has also been discussed in the context of public engagement and argumentation strategies, with emphasis on communication processes in addition to the exchange of content.2 Giubilini, Gur-Arie and Jamrozik (2025) look at the trustworthiness of experts in the context of vaccine research. Drawing on the case of COVID-19 vaccination for children, they argue that expertise is more than an epistemic notion, requiring trustworthiness, which in turn demands that one acknowledge disagreement and uncertainty.
In this paper, I will extend the investigation to conditions for the trustworthiness of institutions involved in vaccine research under conditions of uncertainty. While Giubilini et al. discuss how disagreement from a minority of scientists can be handled through individual virtues of experts, such as epistemic humility, I will look into features of the research community required for trustworthiness. This is particularly important in the case of vaccination, given that vaccine hesitancy has been shown to have a stronger link to institutional distrust than to interpersonal distrust.3 Philosophical approaches have also highlighted the importance of institutional trust, particularly in healthcare systems, for understanding vaccine hesitancy.4 Looking at the case of expert disagreement more specifically, I will argue that the conditions for trustworthy institutions include the ability to provide a fair resolution to disagreements in cases where the evidence available is inconclusive (i.e., under uncertainty). This is enabled by two features: including input from all relevant members of the scientific community (even those who disagree with the majority consensus) and seeking solutions that adequately respond to the public’s needs and interests. To clarify, by “public” I mean non-experts who stand to be affected by scientific decisions on, e.g., classifying a certain vaccine as safe. Goldenberg also refers to the term “publics,” used in science and communication studies to emphasize the plurality of interests and perspectives. As my approach is philosophical, I will use “public” while also highlighting the plurality of views and interests at play. I will further show how my proposed approach can help answer counterarguments raised against transparency more broadly. One such counterargument holds that if the public views science as a value-free enterprise that does not deal with uncertainty or disagreement, being transparent about these issues may end up further decreasing public trust (John 2018). A focus on institutional conditions for trustworthiness, which are compatible with virtues of individual researchers, can further be used to shape the public understanding of science in a way that moves away from consensus and certainty towards acknowledging disagreements, which are then resolved in a fair manner. Section 2 will introduce the argument regarding trustworthiness and acknowledging expert disagreement, as well as contrast it with morally thick notions of trust in the philosophy of science that are not necessarily defining features of expertise. Section 3 will discuss the conditions for trustworthy institutions in the context of vaccine research and address the objection from the public understanding of science.
Explaining how expert disagreement is relevant for trustworthiness requires spelling out the concept of expertise first. Giubilini et al. rely on a morally “thick” notion of expertise which incorporates trustworthiness. “Morally thick” stands for a notion of expertise that, in addition to epistemic requirements such as specialist knowledge, certification by relevant institutions, or professional recognition, also contains normative (particularly, moral) requirements, such as being motivated by the right kind of reasons or making a commitment towards others. In the same vein, while Giubilini et al. take trustworthiness to be reliability, they also acknowledge certain moral components. To make clear what is at stake, I will now review each of these concepts.
Concerning expertise, Giubilini et al. argue against views presenting it as purely epistemic, such as having true or reliable beliefs in a particular domain. Instead, they propose a view of expertise on which “those with expert authority are those we have stronger, or sufficiently strong reasons to trust.”5 The authors further distinguish internal legitimization (i.e., within one’s field) from external legitimization (i.e., public trust). This view is articulated in the broader context of the epistemic dependence of the public on experts: in questions important to members of the public, such as those regarding vaccine safety, individuals have neither the epistemic nor the material resources to acquire the relevant knowledge on their own and have to defer to the scientific community.
This notion of expertise is strong, which makes it vulnerable to counterexamples. For instance, one could point to medical professionals involved in torture6: they can hardly be perceived as trustworthy by the public, who would likely condemn their actions, but they are reliable in pursuing nefarious goals and have medical knowledge and abilities that have been subject to internal legitimization. This example brings out an important distinction: whether expertise is used to meet public needs or for morally questionable goals pursued by those in positions of power. Adopting a morally thick notion of expertise along the lines above has the consequence of not counting the latter case as expertise. At this point I should clarify that my goal here is not to delve deeper into issues regarding how to define expertise and whether trustworthiness is required for it. Rather, my intention is to broaden the space for discussing concerns about expertise, trust, and moral considerations. Thus, even if the concept of expertise above strikes some as too strong, it is still possible to incorporate the moral considerations at the level of trust as a relation between experts and the public, without taking trustworthiness to be a requirement for expertise.
Relevant for the latter option are morally thick notions of trust that have been discussed in connection to various arguments regarding science and values.7 Briefly put, such notions of trust emphasize that epistemic reliability is not sufficient for trust as a relation holding between the public and scientists and that additional conditions, particularly regarding values, are needed. Examples of such requirements for trust include alignment between the values of the public and those followed by the scientists, or having the public’s values and interests represented in the scientific enterprise. In the medical context, reliance has also been spelled out as patients needing doctors in matters of medical knowledge and technical skills, while trust further requires that doctors care about their patients, which ensures a sense of community.8 Interestingly, when introducing their view, Giubilini et al. hold that tying expertise to trust is less subject to controversy than defining trustworthiness in a morally thick way.9 It should be noted, however, that they also include moral components in trustworthiness, namely honesty and transparency. Still, this is less demanding than asking for alignment or representation of the public’s values. A distinction can now be drawn between two places where trustworthiness may come in: as a requirement for expertise (the Giubilini et al. route) or as a requirement for cohesion between the scientists and the public (a route enabled by morally thick concepts of trust).10 Giubilini et al.’s call for transparency about uncertainty and expert disagreement can be connected to either one of these claims. Rather than taking sides, my aim here is to highlight that considerations on acknowledging uncertainty and disagreement need not be tied to a strong concept of expertise.
It is quite likely, however, that those disagreeing with one of the claims regarding trustworthiness above may also disagree with the other. The common concern is the following: why should trustworthiness even be under discussion? Shouldn’t experts simply focus on finding the truth? The answer here is that in the case of vaccine research, for recommendations regarding vaccination to work, a threshold of trust is needed to ensure the participation of the public. When such programs fail because of issues related to trust, this casts doubt on the effectiveness of the expertise or science in question. A broader case can be made for trustworthiness as a condition for the success of public health as a whole, or at least of public health interventions more broadly, but that is beyond my purposes here.11
Moving on to questions of uncertainty and disagreement, Giubilini et al. start from the case of the recommendation of COVID-19 vaccines for children in September 2021 despite disagreement regarding whether it would have been better to collect more data on their side effects first. The authors argue for considering disagreement even when it comes from a minority of researchers, spelling out the problem by analogy with equipoise in research ethics. Equipoise cases involve determining whether a patient should be enrolled in a clinical trial when it is uncertain whether the experimental treatment would yield better results than the standard approach. While this issue was initially discussed in the context of decisions by individual clinicians, more recent approaches look at research communities (Fried 1974; London 2020). Giubilini et al. point out that even when there is expert consensus on equipoise, such as consensus on administering the vaccine to children because it is judged to be beneficial, there may be a minority of researchers who think it may be harmful or neutral. The authors’ proposal is that both the initial uncertainty and the disagreement by the minority of researchers regarding equipoise, i.e., whether a vaccine would be beneficial versus harmful or neutral for children, be acknowledged. Doing this could have enabled the collection of additional data, especially considering, e.g., the lower risk associated with contracting COVID-19 in children. Further specifying this, the authors refer to epistemic humility, honesty, and transparency as characteristics that would enable the experts involved to deal with uncertainty and disagreement.
An additional concern arises here: although Giubilini et al. touch upon some structural features, such as the need to move beyond a “majority rules” view when dealing with disagreement, these characteristics mainly apply to individual experts working in research groups. If one accepts this upshot, one may inquire about further institutional conditions for trustworthiness, which, as mentioned above, is particularly relevant in the context of vaccination, where vaccine hesitancy is strongly linked to institutional, as opposed to interpersonal, distrust. This implies that increasing trust in vaccination programs is not only a matter of whether the public thinks that particular scientists are trustworthy, but also of whether the institutions involved in vaccine research and policymaking are trustworthy.12 I will now explore ways of dealing with disagreement from an institutional point of view.
Before presenting my account there is another potential worry to address: is it appropriate to talk about trustworthiness when discussing institutions? What if trust is a relation that only holds between individuals? My reply here is that while trusting groups or institutions has not been at the center of earlier analyses of trust, recent work has laid out grounds for trust in institutions.13 Moreover, in recent work trust in science has been framed as a relation between groups and institutions.14 Without going into detail, I henceforth refer to institutional trust in terms analogous to the discussion of expert trustworthiness from the previous section: reliability plus a set of moral features. This will prove to be morally thicker than the notion of trustworthiness employed by Giubilini et al. with regard to experts.
Let us now suppose scientists are transparent about there being disagreement from a minority of researchers regarding whether to deem the COVID-19 vaccines safe for children or to keep gathering evidence. Would that make the scientific community trustworthy? Here, I leave aside questions about the quality of scientific communication and its accessibility to the public and assume that being trustworthy would also entail being trusted.15 Under such assumptions it is reasonable to point out that, while appreciating the openness, the public may be concerned about how this disagreement will play out and may not yet deem the scientific community trustworthy. To put it another way, the public may not only be interested in knowing about the presence of uncertainty and disagreement, but also about how these will be addressed. This is the point where institutional structures come in, ensuring a fair resolution of the disagreement. Fairness is particularly relevant for trustworthiness because under uncertainty, when empirical evidence alone cannot settle the disagreement, social and political values are needed. More specifically, fairness can counter legitimate distrust arising when science is complicit in injustice, as discussed in the literature on trust.16 My focus here is on fairness in dealing with disagreement within the scientific community, while the broader conditions for justice are beyond the scope of this paper. Still, given that values come in when dealing with uncertainty, my argument should be read as putting together practical recommendations stemming from science (particularly vaccine research) with societal goals, rather than looking only at epistemic concerns within science.
Without seeking to provide an exhaustive account, I introduce two conditions that institutions involved in vaccine research should meet for trustworthiness when dealing with disagreement. They can be expanded or supplemented for related cases, but that would be beyond the purposes of this paper. The conditions are as follows: (i) ensuring that input is considered from all relevant members of the scientific community and (ii) seeking solutions that adequately respond to the public’s needs and interests.
The former requirement refers to how the members of the scientific community interact with one another. Giubilini et al. rightly point out that when uncertainty and disagreement are not acknowledged, some scientists may choose to stay silent or may be pressured to do so. But even when disagreeing voices are acknowledged, there is a question of whether all the relevant people have a place at the table and whether their views are given due consideration. This goes beyond the case against simply following a “majority rules” view. When dealing with disagreement, scientifically plausible input should play a role in the decision-making process regardless of whether it comes from a minority of scientists or from scientists who occupy lower positions in hierarchies of power and/or prestige within or across the relevant disciplines. Looking at public health interventions during the COVID-19 pandemic beyond vaccination programs, the neglect of expertise from the social sciences is one notable example of the latter.17 Input from social scientists could have helped predict the difficulties particular vulnerable groups faced in complying with the social distancing mandates, or the increase in inequality. While there are multiple explanations for the marginal involvement of the social sciences, one is in terms of disciplinary hierarchies, where epidemiological and, more broadly, quantitative approaches were prioritized. A requirement of taking into account input from the relevant members of the scientific community, within as well as beyond the specific research area, would help counteract such problems, at least in part.
There is significant overlap between this requirement and Longino’s discussion of objectivity, which involves diversity of values among the scientific community and openness to transformative criticism in a setting comprising shared norms and equality of intellectual authority.18 Not taking into consideration the views of vaccine researchers who called for gathering more evidence regarding side effects can be explained as a failure to accept transformative criticism on the part of the scientific community. Yet, for problems that go beyond a well-delimited area of expertise, a question arises regarding how to connect different research communities.19 For instance, framing the vaccine safety problem in terms of equipoise, as Giubilini et al. do, would involve communities of both biomedical scientists and bioethicists. Thus, one challenge for the requirement that broader input be considered lies in the divergent norms and aims of different disciplines, and a more complex account than transformative criticism is needed.
The answer to this is to focus discussions comprising scientists working in different areas on specific questions – in this case, when or whether the COVID-19 vaccines should be deemed safe for children – as well as to make transparent the norms and aims of their disciplines. This may not be straightforward, as these norms are often implicit, but they can become transparent, for instance, when debating opposing views with other scientists.20 The upshot here is that when there is high variation in the norms and aims underlying scientific fields, the focus should be on the specific problem, with as much specification as possible of where the scientists participating in the discussion are coming from.
Another challenge concerns how to filter out input from bad faith actors, that is, participants whose purpose is to obstruct the process, to silence good faith participants, or to cast doubt upon reasonable points of agreement. Bad faith actors can be groups outside of science with particular interests (e.g., industry), but also, relevant for the point here, members of the scientific community (see Schüklenk 2025). With regard to vaccination specifically, a good example is the case of Andrew Wakefield, who fabricated evidence to support the claim that MMR vaccines cause autism and engaged in public campaigns against vaccination. The objection asks: if all views are included, how can harms stemming from claims made by bad faith actors such as Wakefield be countered? The answer here is to look further into institutional epistemology. Work on scientific collaboration and epistemic dependence has highlighted the role of social and moral values that enable research as a joint enterprise, such as those from research ethics.21 As Rolin puts it, these values are “woven into the epistemic fabric of scientific collaboration.”22 Thus, while all members of the scientific community should have a place at the table, views that go against the epistemic standards of the community (such as those relying on fabricated data) should be excluded.23
A similar treatment can be applied to positions that only seek to obstruct rather than to contribute to the research purposes or, more broadly, to the public good.24 The challenge for these latter cases is to distinguish genuine expressions of concern from cases where doubt is used to prevent urgent action. Examples include the role of the tobacco industry in propagating doubt regarding the scientific evidence for the negative health effects of smoking25, or cases of “paralysis by analysis,” when endless technical debate about policy is used to prevent any policy response.26 Telling such cases apart from those where doubt is justified can be done by referring to particular scientific aims, or to aims more broadly related to the values of the public. Going back to the example of COVID-19 vaccination for children, stating specific concerns about gathering more evidence about side effects and new standards to prevent the potential harms of a premature rollout is different from overestimating uncertainty and indefinitely postponing decision-making and action. The broader point here is that doubt can serve different purposes, some harmful, such as those mentioned above, but also some that help draw attention to other issues relevant to public trust.27
Another issue is how to counter the marginalization or exclusion of perspectives from scientists who have less power or who speak against groups with more power or resources. The point to note here is that this issue is more systemic, involving not only scientific institutions but also funding bodies or industry actors. I acknowledge the relevance of relations to other institutions and the need for even more systemic approaches to prevent existing power hierarchies in society from spilling over into interactions within scientific communities. At the same time, my arguments are meant to provide a starting point for thinking about trustworthiness at the level of scientific institutions rather than to address issues at higher levels of generality. A wider perspective on science as part of democratic society, and on how to overcome persistent patterns of injustice, can be provided by drawing on recent strands of work in democratic theory singling out the role of institutions in distortions of deliberative processes.28 Thus, overall, taking into account input from relevant members of the scientific community requires careful consideration of who is a good faith actor, but also of who may be prevented from speaking up due to pre-existing unjust power structures.
Nevertheless, even after addressing these challenges and making sure that scientists interact in ways that are more likely to yield a fair approach to disagreement and uncertainty, there is no guarantee that there will be agreement on a solution. Furthermore, identifying bad faith actors or cases of silencing may require looking beyond the values essential for the scientific community to work together. This is where the second component comes in, namely how the scientific community relates to the needs and interests of the public. The condition is that when dealing with uncertainty, solutions that better align with the public’s needs and interests should be prioritized. This condition departs from Longino’s approach insofar as values play a further role in guiding the research, beyond their endorsement by particular scientists engaging in transformative criticism.29 The point can be partly spelled out by drawing on work on inductive risk in the philosophy of science and partly by looking deeper into issues of values and pluralism.
Briefly put, cases of inductive risk involve scientific decisions under uncertainty: given the probability of the scientific judgment being wrong, one must choose which kinds of errors are preferable within the specific context. In the science and values literature, inductive risk provides an example of a situation where values legitimately influence the scientists’ decisions under uncertainty. Douglas uses the example of investigating dioxin as a potential carcinogen in rats and whether to count uncertain cases as tumors.30 Doing so requires weighing what happens if one is wrong in counting the uncertain cases or in discounting them, which has consequences for how strict the recommended regulations will be. In this particular example, one may point out that stricter regulations, although they may rely on false positives, are better from the perspective of the interests of the public, because doing otherwise would increase the risk of people being exposed to a carcinogen. Recent work in the philosophy of science has looked at how to connect the values of the public to scientific research in cases of inductive risk. For instance, Irzik and Kurtulmus argue for value alignment, namely social mechanisms that enable members of the public to provide input to the scientists on how to distribute risks.31 Schroeder goes one step further, holding that such decisions are to be made by members of the public through deliberative democracy exercises and subsequently followed by the scientists.32 The upshot here is that the public is meant to have a say in decisions under uncertainty regarding matters of public interest.
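Returning to Douglas’s dioxin example, the trade-off at stake in inductive risk can be given a schematic decision-theoretic rendering (an illustrative formalization of my own, not one found in Douglas’s text): let $p$ be the probability that the substance is in fact carcinogenic, $C_{FN}$ the societal cost of leaving a real carcinogen unregulated (a false negative), and $C_{FP}$ the cost of regulating a harmless substance (a false positive). Counting the uncertain cases as tumors, and hence recommending stricter regulation, is then preferable whenever the expected cost of erring on the permissive side exceeds the expected cost of erring on the cautious side:

$$p \, C_{FN} > (1 - p) \, C_{FP}.$$

On this rendering, values enter through the cost terms: a public that weighs exposure to a carcinogen as far worse than overregulation (i.e., $C_{FN} \gg C_{FP}$) licenses stricter regulation even when $p$ is modest.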
Moving on to the context of vaccine research, the choice is between assessing vaccines as safe, with the risk of later discovering new side effects, or gathering more evidence about side effects while also delaying the vaccination program. A solution broadly along the lines of alignment with the public’s interests would weigh the benefits children receive from this program (rather small, given their low-risk status in connection to COVID-19 infection), the benefits among the general population (uncertain, as the vaccines were not stopping transmission), and the harms children may suffer due to potential side effects. In this case, searching for more evidence seems better aligned with children’s interests while being neutral with respect to the interests of the overall population. This also fits broader investigations of values and public health, particularly the fair distribution of the benefits and burdens of a particular intervention.33 Nevertheless, this solution does raise additional questions regarding disagreement about values. Let us suppose a vaccine that stopped transmission had been available – how should one weigh the potential harms to children against the benefits of lowering transmission rates, especially for more vulnerable groups? Here, other principles may come in, such as giving priority to the interests of those most affected, which aligns with the equipoise framing. Still, other considerations may be relevant, such as lowering the transmission rate and thus protecting the most vulnerable, in which case one may recommend that children be vaccinated. This brings up a further challenge regarding addressing disagreement: the public’s interests and goals may often be in conflict. This concern has been raised, among others, in connection to following democratic values in policy-relevant social science.34
To further specify the condition that solutions to disagreements adequately respond to the needs and interests of the public, let us represent the various, interlinked interests, needs, and goals of members of the public as a network. Nodes of various degrees of strength bring together overlapping needs and values. Within this metaphor, specific science or policy decisions relate differently to various nodes: when they take into account a particular cluster of needs, interests, and values, certain nodes are strengthened. In cases of conflicting interests and values, this also means that the same decisions will weaken other nodes. To clarify this through the example of vaccination, deciding to expose the members of the public less susceptible to COVID-19 infection to vaccine side effects would weaken the side of the network comprising their interests and needs. Still, if the vaccine stops transmission, this decision may strengthen the part of the network comprising the interests of the more vulnerable groups. Thus, the disagreement can be spelled out as choosing which nodes to strengthen or weaken. Another feature of the network is that all the nodes are needed to keep it together: this means that one cannot weaken particular nodes indefinitely without expecting disruptions within the entire network. Again, this is fairly straightforward in the case of vaccination, where alienating particular groups may threaten the effectiveness of the vaccination program, given that its success requires the collaboration of a (typically high) percentage of the population.
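The metaphor can be made slightly more precise (again, a schematic formalization of my own rather than part of the original argument): let each node $i$ carry a strength $w_i$, and let a decision $d$ have an impact $\delta_i(d)$, positive or negative, on each node, so that

$$w_i \mapsto w_i + \delta_i(d), \qquad \text{subject to } w_i \geq \tau \text{ for every } i,$$

where $\tau$ is a threshold below which the corresponding group becomes alienated and the network starts to unravel. On this rendering, a fair sequence of decisions may lower some $w_i$ temporarily, but must compensate in later rounds so that no node stays below $\tau$ in the long run.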
There are two points about a fair resolution of disagreement to be drawn from here. The first is that a fair resolution of disagreement does not require finding one right solution cutting across divergences in value commitments on the part of the public. Some situations may involve difficult choices – such as weighing the interests of the members of the population less affected by COVID-19 (e.g., children) against those of groups more severely affected by the disease. Secondly, making decisions that do not align with the interests of specific groups should be balanced by taking these interests into account when making future decisions. This requires transparency about value choices, as well as a commitment to value choices likely to keep the system together in the longer term, i.e., not alienating particular parts of the public. Relevant actions in this sense can include remedying the negative effects experienced by groups whose interests were de-prioritized, whenever possible, or prioritizing their interests in future choices involving value conflicts. To use COVID-19 policies as examples, while the risk of experiencing vaccine side effects cannot be remedied post hoc, losses due to school closures can be remedied by, e.g., increasing access to education. Although my focus here is on how scientists should deal with disagreement, work on public engagement and democratic deliberation can further help spell these points out. For instance, in cases where it is impossible to act in accordance with everyone’s interests, solutions should aim for something along the lines of a sufficiently endorsed compromise.35
To sum up, a fair resolution of disagreement need not ensure that a solution aligning with everyone’s interests will be found, although this is preferable when possible. The point is, rather, that the choice aligns with a sufficient range of the public’s interests, that there is a clear statement of which values and interests were prioritized and why, and that there is a commitment to making future decisions that will better represent the values and interests that were set aside earlier. This also sheds further light on the link between scientific institutions and political or social ones. In some cases, managing disagreement can be done within the scientific community or by appeal to widely shared public goods. Yet, in other cases, the decision requires further engagement with relevant groups, negotiation, and commitments regarding future choices to ensure no interests or needs are overlooked in the longer term. Thus, the link between science and politics can have varying degrees of strength. This further helps answer concerns about overpoliticizing science when arguing for alignment with the values of the public:36 while politicization may be inevitable in difficult cases (such as balancing the interests of different age groups), there are also cases where it is not.
One remaining concern here is the connection between being trustworthy and being trusted. As specified earlier, I take a fair resolution of disagreements under uncertainty to be one of the features that make the scientific community trustworthy. Yet being trustworthy is not the same as being trusted by the public. Looking at being trusted, there are further issues to investigate, particularly the communication between the scientists and the public and the place of science within other institutions and social structures. The former, namely how scientific decisions are communicated to the public, requires considering a broader objection that also goes against Giubilini et al.’s case for transparency. Writing in connection to the role of values in scientific decision-making, John points out that openness and transparency may actually undermine public trust in science if the public thinks of science as value-free.37 The set of views that the public may hold about science can be referred to more broadly as “folk philosophy of science.” If folk philosophy of science assumes not only value-freedom, but also depictions of science where there are no disagreements between scientists and no cases of uncertainty, then transparency about disagreement may decrease public trust in science. A side note here is that the empirical evidence regarding such a folk philosophy of science is mixed.38 Still, even assuming a folk philosophy of science along the lines suggested by John, there is a case to be made for changing it. This is because views that emphasize the social context and other non-epistemic aspects shaping scientific practice are not the only ones that can be used to undermine public trust. Views emphasizing value-freedom, consensus, or certainty can also be instrumentalized for these purposes. For instance, groups promoting vaccine hesitancy encourage people to “do their own research,” often overstating the uncertainty in estimating side effects, which presupposes a degree of certainty unachievable through the methods currently in use. Wilholt also points out that under such a misguided public understanding of science, “the indicator properties brought into use by the public do not match the properties that in fact make science trustworthy.”39 Thus, addressing these issues requires changing the public understanding of science to make room for uncertainty and disagreement. While a broader discussion of what a better folk philosophy of science may look like is beyond my purposes here, the discussion thus far has sketched out one important feature: grounding trust not in certainty and unanimity, but in the acknowledgment of uncertainty and disagreement and the assurance that they are addressed in a fair manner.
The other, broader, concern about being trusted by the public, and not only being trustworthy, is how science relates to broader social structures and institutions. Once again, scientists do not have control over all of the relevant aspects, and important issues here concern the funding structure for scientific research, incentives for scientists, as well as the translation of scientific outputs into health policies and their workings within healthcare systems. While scientists can engage in advocacy regarding, say, the need to research vaccines for neglected diseases or the need to make them available to particular vulnerable populations, wider social changes are needed to affect public trust when it comes to the link between science and other institutions. To sum up, while I have highlighted that, among other things, trustworthiness requires a fair resolution of disagreement under conditions of uncertainty, being trusted by the public has wider requirements, involving institutions such as those responsible for science education, research funding, and healthcare provision.
In this paper I have argued that trustworthiness in the case of vaccine research requires not only transparency about disagreement in cases of uncertainty, but also a fair resolution of disagreement. To this end, I have brought forward two conditions to be met by institutions and not only by individual scientists: considering input from all relevant members of the scientific community and seeking solutions that adequately respond to the public’s needs and interests. Drawing on the science and values literature, I have shown how this proposal works and how it can answer the main counterarguments. Nevertheless, meeting the first condition in particular involves the complex dynamics of rightfully excluding some actors while making sure that others are not silenced, which requires the consideration of power imbalances in society beyond those present within scientific institutions. Despite these difficulties, my focus on scientific institutions can help find ways of enhancing trustworthiness without waiting for larger-scale patterns of societal change, which would take longer, though the two are not exclusive. On the proposed view, trustworthiness proves to be a thicker notion than one involving only honesty and transparency, but this also helps explain connections to broader political questions that have recently gained prominence in the philosophy of science and that can be explored further in the public health context and beyond.
1. Goldenberg (2021).
2. Ivani and Dutilh Novaes (2022).
3. Lazarus et al. (2021); Krastev et al. (2023).
4. Lalumera (2018).
5. Giubilini et al. (2025): 14.
6. See Nicholl et al. (2007). Schüklenk (2025) also has an example of scientists working in biochemical weapons manufacturing, although the purpose of this example is to challenge the external legitimization requirement.
7. E.g., Wilholt (2013); Irzik and Kurtulmus (2019); Bueter (2021).
8. Curlin and Tollefsen (2021).
9. Giubilini et al. (2025): 8-9.
10. Also see Popa (2024): sect. 3.
11. For an argument in this sense, see Popa (2024).
12. Also see Contessa (2023) for a social approach to trust in science.
13. See Hawley (2017) for the former and Bennett (2024) for the latter.
14. Contessa (2023).
15. The additional conditions for being trusted, though, should be taken into account when discussing this in policy or more practical contexts.
16. Scheman (2001); Krishnamurthy (2015); Grasswick (2017).
17. Lohse and Canali (2021).
18. Longino (1990, 2002).
19. Koskinen (2017).
20. See Pamuk (2021): ch. 4.
21. Andersen and Wagenknecht (2013).
22. Rolin (2015): 173.
23. Also see Longino (1990, 2002).
24. Also see de Melo-Martín and Intemann (2018).
25. Oreskes and Conway (2011).
26. Zabdyr-Jamróz (2020); Zabdyr-Jamróz and Popa (forthcoming).
27. Cf. Baghramian and Caprioglio Panizza (2022) on skepticism.
28. E.g., Pottle (2024).
29. Also see de Melo-Martín and Intemann (2018).
30. Douglas (2000).
31. Irzik and Kurtulmus (2019).
32. Schroeder (2021).
33. Popa (2024): sect. 3.
34. Thoma (2024).
35. Cf. Valkenburg (2020): 349.
36. E.g., Schroeder (2021); Thoma (2024).
37. John (2018).
38. Elliott et al. (2017); Hicks and Lobato (2022).
39. Wilholt (2023): 201.
Acknowledgments: I would like to thank Tomasz Żuradzki and the anonymous referees for feedback on earlier versions of this article that have helped improve it.
Funding: This research is part of project No. 2021/43/P/HS1/02997, co-funded by the National Science Centre and the European Union Framework Programme for Research and Innovation Horizon 2020 under the Marie Skłodowska-Curie grant agreement no. 945339.
Conflict of Interests: None declared.
License: This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Andersen, H., Wagenknecht, S. (2013), “Epistemic dependence in interdisciplinary groups,” Synthese 190, 1881-1898.
Baghramian, M., Caprioglio Panizza, S. (2022), “Scepticism and the value of distrust,” Inquiry, 1-28.
Bennett, M. (2024), “Trusting groups,” Philosophical Psychology 37(1): 196-215.
Bueter, A. (2021), “Public epistemic trustworthiness and the integration of patients in psychiatric classification,” Synthese, 198: 4711-4729.
Contessa, G. (2023), “It takes a village to trust science: towards a (thoroughly) social approach to public trust in science,” Erkenntnis, 88(7): 2941-2966.
Curlin, F., Tollefsen, C. (2021), The Way of Medicine: Ethics and the Healing Profession, University of Notre Dame Press, Notre Dame.
Douglas, H. (2000), “Inductive risk and values in science,” Philosophy of Science, 67(4): 559-579.
Elliott, K. C., McCright, A. M., Allen, S., Dietz, T. (2017), “Values in environmental research: Citizens’ views of scientists who acknowledge values,” PLOS ONE, 12(10), e0186049.
Fried, C. (1974), Medical Experimentation: Personal Integrity and Social Policy, North Holland Publishing, Amsterdam.
Giubilini, A., Gur-Arie, R., Jamrozik, E. (2025), “Expertise, Disagreement, and Trust in Vaccine Science and Policy: The Importance of Transparency in a World of Experts,” Diametros 22 (82): 7-27.
Goldenberg, M. J. (2021), Vaccine hesitancy: public trust, expertise, and the war on science, University of Pittsburgh Press, Pittsburgh.
Grasswick, H. (2017), “Epistemic injustice in science,” [in:] The Routledge handbook of epistemic injustice, I. A. Kidd et al. (eds.), Routledge, New York: 313-323.
Hawley, K. (2017), “Trustworthy groups and organizations,” [in:] The philosophy of trust, Oxford University Press, Oxford: 230-250.
Hicks, D. J., Lobato, E. J. C. (2022), “Values disclosures and trust in science: A replication study,” Frontiers in Communication, 7, 1017362.
Intemann, K., de Melo-Martín, I. (2014), “Addressing problems in profit-driven research: How can feminist conceptions of objectivity help?,” European Journal for Philosophy of Science, 4: 135-151.
Irzik, G., Kurtulmus, F. (2019), “What Is Epistemic Public Trust in Science?,” The British Journal for the Philosophy of Science 70 (4): 1145-66.
Ivani, S., Dutilh Novaes, C. (2022), “Public engagement and argumentation in science,” European Journal for Philosophy of Science, 12(3): 54.
John, S. (2018), “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty,” Social Epistemology, 32(2): 75-87.
Koskinen, I. (2017), “Where is the epistemic community? On democratisation of science and social accounts of objectivity,” Synthese, 194: 4671-4686.
Krastev, S., Krajden, O., Vang, Z.M., Juárez, F.P.G., Solomonova, E., Goldenberg, M.J., Weinstock, D., Smith, M.J., Dervis, E., Pilat, D., Gold, I. (2023), “Institutional trust is a distinct construct related to vaccine hesitancy and refusal,” BMC Public Health 23, 2481.
Krishnamurthy, M. (2015), “(White) Tyranny and the democratic value of distrust,” The Monist, 98(4): 391-406.
Lalumera, E. (2018), “Trust in health care and vaccine hesitancy,” Rivista di estetica (68): 105-122.
Lazarus, J.V., Ratzan, S.C., Palayew, A., Gostin, L.O., Larson, H.J., Rabin, K., Kimball, S., El-Mohandes, A. (2021), “A global survey of potential acceptance of a COVID-19 vaccine,” Nature Medicine 27: 225-228.
Lohse, S., Canali, S. (2021), “Follow the science? On the marginal role of the social sciences in the COVID-19 pandemic,” European Journal for Philosophy of Science, 11(4): 99.
London, A.J. (2020), “Equipoise: Integrating Social Value and Equal Respect in Research with Humans,” [in:] The Oxford Handbook of Research Ethics, Oxford University Press, Oxford: 1-24.
Longino, H.E. (1990), Science as Social Knowledge: Values and Objectivity in Scientific Inquiry, Princeton University Press, Princeton and Oxford.
Longino, H. E. (2002), The fate of knowledge, Princeton University Press, Princeton and Oxford.
de Melo-Martín, I., Intemann, K. (2018), The Fight against Doubt: How to Bridge the Gap between Scientists and the Public, Oxford University Press, Oxford.
Nicholl, D. J., Jenkins, T., Miles, S. H., Hopkins, W., Siddiqui, A., Boulton, F. (2007), “Biko to Guantanamo: 30 years of medical involvement in torture,” The Lancet, 370(9590): 823.
Oreskes, N., Conway, E. M. (2011), Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming, Bloomsbury Publishing USA, New York.
Pamuk, Z. (2021), Politics and expertise: How to use science in a democratic society, Princeton University Press, Princeton and Oxford.
Popa, E. (2024), “Values in public health: an argument from trust,” Synthese, 203(6): 200.
Pottle, J. (2024), “Democratic Equality Beyond Deliberation,” American Political Science Review: 1-12.
Rolin, K. (2015), “Values in science: The case of scientific collaboration,” Philosophy of Science, 82(2): 155-177.
Scheman, N. (2001), “Epistemology resuscitated: Objectivity as trustworthiness,” [in:] Engendering rationalities, N. Tuana and S. Morgen (eds.), SUNY Press, New York: 23-52.
Schroeder, S.A. (2021), “Democratic Values: A Better Foundation for Public Trust in Science,” The British Journal for the Philosophy of Science 72 (2): 545-562.
Schüklenk, U. (2025), “Expertise and Expert Authority,” Diametros 22 (82): 102-105.
Thoma, J. (2024), “Social Science, Policy and Democracy,” Philosophy & Public Affairs, 52(1): 5-41.
Valkenburg, G. (2020), “Consensus or contestation: Reflections on governance of innovation in a context of heterogeneous knowledges,” Science, Technology and Society, 25(2): 341-356.
Wilholt, T. (2013), “Epistemic trust in science,” The British Journal for the Philosophy of Science 64 (2): 233-253.
Wilholt, T. (2023), “Harmful Research and the Paradox of Credibility,” International Studies in the Philosophy of Science 36 (3): 193-209.
Zabdyr-Jamróz, M. (2020), Wszechstronniczość. O Deliberacji w Polityce Zdrowotnej z Uwzględnieniem Emocji, Interesów Własnych i Wiedzy Eksperckiej, Wydawnictwo Uniwersytetu Jagiellońskiego, Kraków.
Zabdyr-Jamróz, M., & Popa, E. (forthcoming), “Three Inputs of Deliberation: Expertise, Self-Interest, and Emotions,” Politeja.