*Department of Educational Studies, University of London
https://orcid.org/0000-0002-9786-1602
email: a.traianou@gold.ac.uk
*Faculty of Education and Language Studies, The Open University
https://orcid.org/0000-0001-6842-6276
email: martyn.hammersley@open.ac.uk
This paper examines the concept of vulnerability in the context of social research ethics. An ambiguity is noted in the use of this term: it may refer to an incapacity to provide informed consent to participate in a research project, or it may imply heightened susceptibility to the risk of harm. It is pointed out that vulnerability is a matter of degree, and that there are different sources and types of harm, which must be taken into account in any judgment about whether additional precautions are required to protect particular categories of research participants. Furthermore, such judgments must be sensitive to the particular context in which research is taking place. This is one of several considerations that raise questions about the desirability of the sort of pre-emptive ethical regulation that has become institutionalized in many countries over the past few decades, a form more appropriate to medical than to social research. However, this is not to deny that a concern with the vulnerability of research participants is necessary on the part of social researchers. Furthermore, it must be recognized that researchers themselves may be vulnerable to harm in the research process. Finally, some discussion is provided of the way in which a concern with vulnerability can conflict with other considerations that researchers need to take into account in doing their work. The key point is that vulnerability is a complex and controversial concept, and it requires careful handling in thinking about social research ethics.
Keywords: social research ethics, vulnerability, harm, informed consent, research ethics committees, ethical regulation
It is common in research ethics codes, and in the procedures employed by Institutional Review Boards (IRBs) or Research Ethics Committees (RECs), now operating in universities and other institutions in many countries, for certain categories of research participants to be defined as vulnerable, with heightened protections placed upon research projects involving them.1 Typical categories of the vulnerable listed include: young children, people with learning difficulties, sex workers, prisoners, people in poverty, and those suffering from serious illnesses. However, there is scope for disagreement about who should, and should not, be included in this list. For instance, some commentators challenge treating children as vulnerable because this implies that they lack competence or are powerless.2 Other commentators have pointed out that groups not normally considered as such may nevertheless be vulnerable, including even members of political elites.3 And at least one writer has challenged the very appropriateness of the concept of vulnerability in the context of research ethics.4 Even writers of standard texts in the field often indicate their concerns about this concept in the way that they write about it: for example, Wiles5 refers to “Research with children and other (so called) ‘vulnerable groups’ who are viewed as lacking the ‘capacity’ (or ‘competence’) to give consent […].”
In this paper we examine the concept of vulnerability, primarily in the context of social research, with a view to clarifying some of the issues raised in the literature.6 As will become clear, we believe that this concept does have an important role to play in research ethics, but that there are many complexities surrounding it, and that its ramifications are considerably wider than often recognized.
In the broadest terms, vulnerability can be interpreted as an inability, for whatever reason, to protect one’s own interests.7 In the context of ethical regulation, the focus has been primarily on those who may not be able to provide informed consent. Van den Hoonaard8 writes that “The concept of vulnerability has been the keystone test in medical research when researchers had to know whether a research subject had the capacity to understand and give consent to being researched.” And, as he goes on to note, in social research, too, “vulnerable” is often taken to mean “potentially incapable of providing genuine informed consent,” whether through an inability to understand the information provided or to exercise autonomous decision-making. However, the ordinary meaning of the word “vulnerable” is rather different from this focus on informed consent. A common definition is: “susceptible to attack or injury, physical or non-physical.”9 Given this, of the various ethical principles listed in codes and texts on research ethics, minimizing harm is the one most relevant to this second sense of vulnerability, though the protection of privacy and the prevention of exploitation by researchers or others would also need to be included. While the two interpretations overlap, they are not coextensive: those deemed incapable of providing informed consent may well be more susceptible to harm than others, but the converse is not necessarily true. In this paper we will focus primarily on the second interpretation of vulnerability, as susceptibility to the threat of harm.
It is important to emphasize that vulnerability, in both the senses we have discussed, is a matter of degree.10 Thus, it has often been pointed out that the notion of fully informed, entirely free consent is a mirage.11 Indeed, in practice, participants may often gain relatively little understanding of what will be involved in the research process even when they sign a consent form. This can be for a variety of reasons (insufficient background knowledge; unwillingness to spend the time and effort to become informed; complicated forms, especially in medicine); and they may not be entirely free to give or refuse consent, for instance as a result of kin group, peer group, or organizational constraints.12 Similarly, the likelihood and severity of harm are also matters of degree.13 We are all vulnerable to some threat of harm, this arising in part from the fact that we are “dependent rational animals.”14
So the key question is not who is and who is not vulnerable, but rather what degree of vulnerability is involved.15 Where vulnerability is taken to indicate the need for extra protection, some threshold must be assumed above which this is required. Thus, judgments have to be made about how vulnerable the participants in a study are. In making such judgments it needs to be remembered that individual members of any category deemed vulnerable will vary in their level of vulnerability, so that decisions have to take account of all the relevant characteristics and social relations of the particular people involved.16 Part of what is involved here is intersectionality: people are members of multiple categories, and judgments about their vulnerability may differ depending upon which category is prioritized.
While IRBs and RECs typically treat some categories of people as significantly more vulnerable than others, the sort of pre-emptive ethical regulation that has come to be established in many Western countries, and elsewhere, effectively assumes that all research participants are sufficiently vulnerable that they must be protected by the screening of research proposals, with any proposal deemed unethical being modified or blocked. Furthermore, this implies that researchers are incapable of acting ethically unaided, or at least that they cannot be relied upon to do so.17 The alleged source of this incapacity is that they have a vested interest in pursuing research since it brings them career advantages. As van den Hoonaard18 points out, it appears that ethics committees frequently regard researchers as “powerful, potentially uncontrolled and dangerous,”19 as well as assuming that research participants are not capable of protecting their own interests. It is on such oversimplified assumptions that damaging policies are frequently based.
This pre-emptive procedure initially arose in medical research, in response to abuses which caused or seriously threatened physical harm to patients. The regulatory system that resulted, initially in the United States and later spreading to some other countries, was certainly justified, though its subsequent development has created serious problems even in the medical field.20 The extension of this system to social research is much more open to question, given that the character of investigations there is very different.21 While a number of ethical controversies have arisen in this field, these have generally presented a considerably lower level of potential harm than is involved in much medical research.22 After all, surgical or pharmacological treatments that can pose quite severe risks to physical health are absent from most social research.23 Here, often, no research intervention is involved: people are simply asked to fill in a questionnaire, are interviewed, or are observed in their normal activities. Of course, risks of harm can still arise – to reputations, economic interests, or mental health – but social researchers’ activities rarely seriously threaten participants’ wellbeing, by comparison not just with medical research but also when judged against other factors in their environment.24
Of course, this low level of risk might be judged irrelevant if the primary concern is not harm but respect for the autonomy of research participants. Perhaps this is why “vulnerability” is frequently treated as an incapacity to provide sufficiently informed, and sufficiently free, consent. But there is a danger here not just of informed consent being used as a means of passing on the responsibility for minimizing harm to participants themselves, but also of those judged incapable of informed consent being excluded from research participation, and their perspectives remaining unrepresented as a result. Furthermore, as we have noted elsewhere, the frequently recommended strategy for overcoming an incapacity to provide informed consent involves a contradiction: allowing others to provide proxy informed consent does not respect a person’s autonomy.25
Equally, though, the driver behind the pre-emptive regulatory system may be neither a commitment to protecting participants from harm nor ensuring that their autonomy is respected. Critics have suggested that a primary concern – on the part of funding bodies, universities, and other institutions26 – is with the risk of litigation, financial penalty, and/or damage to public reputation, should problems arise in research they have sponsored. And this is a concern that is likely to be reinforced by the companies providing them with insurance against these threats. Here “vulnerability” takes on a new meaning, both in terms of who is being treated as vulnerable, and what they are being treated as vulnerable to.27
We have suggested that, generally speaking, the likelihood of severe harm to participants in social research is low, when judged against much medical research and the background sources of potential harm that operate in people’s lives. We have also indicated that we believe, with van den Hoonaard, that IRBs/RECs tend to exaggerate the vulnerability to harm of social research participants, and we have suggested an explanation for this. However, this does not mean that social researchers can ignore potential vulnerabilities to harm on the part of the people they study. The key point, instead, is that any general labelling of people belonging to a particular category as vulnerable can be no more than indicative, suggesting the need for awareness that there may be distinctive threats of harm that need attention.28 This label should be a starting point for ethical reflection, not immediate grounds for imposing extra safeguards. Moreover, judgments about this will need to take account of the particular character of the situations and of the participants involved: in short, these judgments must be contextually sensitive.
While it is often treated as an abstraction, “vulnerability” is a relational term: to say that people are vulnerable is to imply that they are susceptible to particular kinds of threat, whether these are specified or not. And, if we examine the social categories typically listed under the heading of “the vulnerable,” it becomes obvious that they relate to different sources and types of potential harm, ranging across physical injury, psychological damage, material loss, and tarnished reputation, as well as obstruction to ongoing activities.29 For instance, babies are vulnerable to physical dangers and emotional distress, but probably not to embarrassment or damage to reputation. Similarly, the vulnerabilities of adults suffering from terminal illnesses are likely to be different from those of people living in poverty (though, of course, some people who are terminally ill also live in poverty). Indeed, once we start thinking in these terms, it becomes clear that people outside of the categories routinely labelled vulnerable may be susceptible to specific threats of harm that require attention from researchers. To take an example we mentioned earlier, members of political elites can be more susceptible than other people to damage to their public reputations, since they are more easily identifiable by readers of research reports. At the same time, these people are probably much less vulnerable to exploitation by the researcher – indeed, they often take control of the research relationship.30 One implication of this is that vulnerability to particular harms must be monitored by researchers in relation to all participants, not just those belonging to “vulnerable” categories.
However, this opens up another question: for what threats of harm are researchers to be held responsible? It should not be assumed that researchers have an all-encompassing responsibility to keep participants safe from harm. We have already suggested that there is a background threshold below which the likelihood and/or severity of a harm is small enough for the researcher to ignore it. Equally important, there are some kinds of vulnerability that, arguably, researchers can legitimately treat as outside of their responsibility to control. This is recognized in some ethics codes, and by IRBs/RECs, when they insist that researchers have a responsibility to report crimes or abuse that they discover in the field. Such reports will, after all, cause harm to those whose actions are being reported. Here, vulnerability to arrest and criminal prosecution or to other kinds of penalty, on the part of lawbreakers or abusers, is excluded from the responsibility of researchers.31 There are some other types of harm that are often treated as beyond the responsibility of researchers as well, for example the impact on key decision-makers when discrepancies are documented between stated policies and what actually happens on the ground. What this makes clear is that the limits to researchers’ responsibility must also be given attention.
Of some relevance here is the distinction between threats that are internal and those that are external to the research process. As we have noted, discussions of vulnerability in the context of social research recognize the possibility that the researcher will witness or hear about abuse by other actors within, or beyond, the setting being investigated. This may be abuse by parents or siblings in a family, by carers within an organization, or even by research participants who are themselves designated as vulnerable.32 Researchers clearly do not have direct responsibility for this harm, but do they have an obligation to intervene to curtail or prevent it? The normal responsibilities of a citizen or person are involved here, rather than ones arising specifically from being a researcher. We might even ask whether being a researcher can involve suspending such normal responsibilities to some degree. By contrast, when researchers have latent professional identities that carry additional obligations, they may need to intervene, for example by reporting what has happened. Even here, though, judgments must be made about the seriousness of the abuse involved, about the extent of the researcher’s responsibility to prevent or report it, about what lines of action could be taken, and about the likely consequences of these, for the vulnerable people involved and for others, as well as for the research. The last of these considerations should not be underplayed, and the likely value of the research must be judged not in terms of its benefits for the career of the researcher but in terms of its contribution to collective knowledge: such knowledge is a public good. Difficult decisions are involved in dealing with external sources of harm to participants, then, about which there can be reasonable disagreement.
In the case of crimes committed by research participants (which need not be victimless), researchers may feel a responsibility to protect participants by not reporting them.33 This is likely to stem not so much from a universalistic commitment to minimizing harm or respecting autonomy as from what they (and research participants) regard as obligations arising out of the relationships built up as part of the research process. This highlights the fact that the range of considerations that researchers need to take into account in making practical research decisions is much wider than the set of ethical principles usually included in ethics codes and texts on social research ethics. Like all of us, they must recognize particularistic obligations, arising from relations of trust, friendship, and so on.34 A further complication is that a researcher usually has closer relations with some people in the field than with others, and, given that there could be conflict between individuals or groups, careful reflection, and perhaps negotiation, may be required about what obligations have been incurred.35
Another caution that needs to be sounded is that social researchers are never in total control of the situations in which they operate; nor are they all-powerful in relation to participants. This is especially true, for instance, when they are carrying out interviews with informants who come from high-status social groups or powerful elites.36 Similarly, researchers have very limited control when doing ethnographic research in settings that are the domains of others. Given that “ought implies can,” there may be harms that arise for participants, whether directly from the research or from other sources, that researchers simply cannot control. Obviously, some initial assessment must be made of the risks, to determine whether the investigation is justified, but uncertain judgments are necessarily involved, and the best decisions that can be made rely on detailed knowledge of the particular people and situations being studied, along with assessments of the value of the specific research project.
Following on from this, we should note that in much discussion of social research ethics it tends to be assumed that researchers have relatively high social status and power, while participants are vulnerable because they are of relatively low status and powerless. While this model matches some research, it is at odds with a significant portion of it.37 The researcher may be a postgraduate student or a junior member of staff on a temporary contract, and they may belong to a minority or oppressed group, while the participants could be relatively high-ranking members of an established profession, or managers in a large organization.38 And the latter may seek to use their power to serve their own interests, for instance asking the researcher for confidential information as a quid pro quo for facilitating access to data.39 Indeed, we should note that, to a considerable extent, researchers are always dependent on the cooperation of research participants to get access to data: gatekeepers may block entry to sites where observation could take place, or to key informants; people in the field could refuse consent to be observed or interviewed, or may actively obstruct the research. As Kim40 notes, researchers do not usually have the power to force gatekeepers and participants to cooperate with their research, even if this were legitimate; and this is true even in the case of the relatively powerless in society, including those deemed “vulnerable.”
The fact that researchers are not all-powerful also indicates that they, too, can be vulnerable. Some attention has been given to this in the literature.41 Most of what we have said about participant vulnerability also applies to researchers: the primary issue is the threat of harm, but this is a matter of degree, and there are different types of harm. Projects vary considerably in how risky they are for researchers, and in how serious the harm involved could be, as well as in the types and sources of harm. Furthermore, while reasonable assessments can be made about these matters, here again perfect prediction is not possible. For example, in planning research with sex workers in Guatemala, Warden42 was aware that she would face some danger, but this did not prepare her for the “existential shock” of witnessing and fearing extreme violence. Worse than this was the fact that she struggled to adapt to what the women she was studying had to cope with all the time. She writes that “The fragility of my body and the ease with which life was destroyed in Guatemala was a grim actuality to normalize.”43 But the most serious harm she experienced was not physical attack but the trauma of leaving the field while knowing that the women whose lives she had studied remained in great danger. She felt that she had been “only a tourist to their troubles.”44 She writes that “after leaving the field I could not turn off my emotional adaptation to Guatemala,” and45 she points to “the vulnerability that accompanies empathy.”46 The effects continued after she had arrived back in Scotland; indeed, in some ways they became worse:
I felt it incredibly difficult to phone my colleagues in Guatemala at the organization I worked with because of a mixture of survivor’s guilt and my own avoidance strategy for fear of reliving my connection with Guatemala that sparked involuntary feelings of panic, but mostly I was afraid to hear if someone had been murdered while I had been safe in Scotland that would worsen my guilt and send me into a shame spiral.47
Not surprisingly, she found it hard to analyze the data she had collected because this triggered the post-traumatic stress disorder that resulted from her fieldwork experience. However, most research projects do not involve this level of threat of severe harm for the researcher.
A key question about the risk of harm to researchers concerns who is to decide what is excessive risk, and on what basis. One might think that this should be a matter for individual researchers themselves, and we believe that this is generally true. But complexities arise in the context of research teams, where junior researchers may feel obliged to take on risky assignments against their better judgment, for fear of losing their jobs or damaging their future careers. Slightly different problems arise where the researcher is a postgraduate student, since their supervisors, and the academic departments to which they belong, may feel that they have a duty of care. Aside from this, here again the potential legal liability of institutions can lead to curbs on researchers’ willingness to put themselves in jeopardy. Whether for good or ill, this is properly a matter of judgment in particular cases.
Clearly it is important that researchers try to inform themselves about any serious risk of harm they may face in the field, and they should take whatever precautions are available against this. A distinction is sometimes drawn between “ambient” and “situational” danger.48 The former refers to variation in the background level of threat across settings: as Warden49 points out, Guatemala City is one of the most violent places in the world. Sampson and Thomas50 report on the distinctive dangers associated with women doing research on board cargo ships, male-dominated environments where one is “trapped in the field” for considerable periods. But they also detail the various strategies they employed to reduce these dangers, and to deal with them when problems arose. By contrast, situational or occasional danger irrupts in a situation over and above any predictable level of threat, even when it is prompted by the presence or actions of the researcher. This is much more difficult to anticipate or prepare for.
Much criticism of the notion of vulnerability in the context of research ethics has pointed out that the measures recommended to deal with “vulnerable” research participants may contravene other values that researchers ought to respect.51 An obvious conflict arises from the fact that protecting people implies that they are not able to protect themselves: that they lack capability or competence in this respect. In other words, it may reinforce stereotypes,52 and could even contribute to rendering people incapable in relevant respects by depriving them of the opportunity to learn what is required.
Closely associated is the complaint that the concept of vulnerability implies an incapacity to exercise autonomous, or at least rational, decision-making, especially in providing informed consent. Mackenzie et al.53 comment on “the danger of using discourses of vulnerability and protection to justify unwarranted paternalism and coercion of individuals and groups identified as vulnerable.” In these terms, to label someone as vulnerable may be at odds with respecting their autonomy.54 Indeed, the measures used to provide protection will often actually prevent people from exercising autonomy: for example, they may be excluded from a project on the grounds that it is too risky for them to take part;55 or it may be insisted that someone else provides informed consent on their behalf, or in addition to their own decision about whether to participate. Similarly, in the case of external threats, not only could reporting abuse lead to an increased, rather than reduced, threat of harm to the person concerned, it could also breach the commitment to protect privacy, as well as (once again) signalling a lack of respect for participants’ autonomy, or a belief that they lack resilience or are incapable of defending themselves. While they may, of course, be vulnerable, here again we are dealing with matters of degree, about which necessarily uncertain judgments must be made, rather than all-or-nothing certainties.
There may also be conflict between a concern with protecting the interests of vulnerable groups and the effective pursuit of social research. One of van den Hoonaard’s56 criticisms of the preoccupation with vulnerability is precisely that it prevents particular kinds of research from being done on groups designated “vulnerable,” or leads to such research being done in ways that are less likely to be successful.57 This is one aspect of a more general point: that, paradoxical as it may sound, there are dangers associated with being too ethical, with giving ethical issues too great a priority.58 The risk here is that it is always possible to talk up the likelihood and/or severity of harm, exaggerating the dangers involved. Indeed, it is sometimes insisted that there should be no risk of harm to participants in research, and that researchers must ensure that none occurs; or, similarly, that people’s autonomy must be fully respected. But these are unrealistic expectations, and if taken seriously they can only lead, ultimately, to the abandonment of social research.59 It is the distinctive responsibility of a researcher to pursue worthwhile knowledge – in other words, knowledge that is of general value – as effectively as possible within appropriate ethical limits; and risks of harm, to participants or researchers themselves, must be weighed against this. Furthermore, there are often side-benefits of research for participants, from having someone they can talk to in confidence to the provision of various services.60 To repeat our key point: what is justifiable is necessarily a matter of situated, and uncertain, judgment, which is not to deny that there are better and worse decisions about this.
It should be clear from our discussion that the concept of vulnerability is complex and controversial. There have been disputes not just about whether or not particular categories of research participants should be treated as vulnerable, but even about the legitimacy of the concept itself. We noted that there is a significant ambiguity in its conventional meaning. In the context of the pre-emptive form of ethical regulation of social research that now prevails in many countries, it is typically taken to refer to an incapacity to provide informed consent to participate in a research project. But there is a broader, more common-sense, meaning, relating to differential susceptibility to harm. We argued that, in both these cases, vulnerability is a matter of degree, and that the threat of harm in most social research is much lower than in medical research, where ethical regulation was initially, and justifiably, established. Furthermore, it is rarely more serious than the background threats that people live with routinely in social life. Vulnerability remains an important concept, and researchers must exercise wise judgment in making decisions about how to treat their participants. But it is a feature of the type of regulatory system now in force that it tends to exaggerate the prevalence of the problems with which it deals, partly as a result of the fact that one of the main drivers behind it is the concern of organizations and institutions to protect themselves from legal as well as reputational challenges or financial penalties.
We focused our discussion primarily on vulnerability as susceptibility to harm, noting that the latter is not only variable but can also take many different forms. This is illustrated by the wide range of groups that are commonly treated as “vulnerable”: they are vulnerable in different ways. We also recognized that researchers themselves could be vulnerable to harm while in the field, and beyond.61 And we highlighted how a concern with protecting people from harm can be in conflict with respecting their privacy or autonomy. We insisted that minimizing serious harm should be the primary ethical concern of social researchers, but that there are limits to their responsibilities even in this respect. Furthermore, there are other sorts of consideration that researchers must take into account in making practical decisions about how to pursue their inquiries, including how research can be pursued most effectively. We underlined the fact that the knowledge produced by research is a public good.
We believe the sort of pre-emptive ethical regulation that is currently in operation in many countries is not fit for purpose in the case of social research, and indeed that it can have damaging consequences.62 In relation to vulnerability, decisions must be sensitive to the particular people and circumstances involved; they cannot be determined by abstract principles or procedures, important though these may be as guides. Indeed, blanket labelling of particular categories of participants as vulnerable undermines good practice in the field: it discourages proper assessments of degrees of vulnerability, as well as of how threats of harm or to privacy should be weighed against respect for competence and autonomy, and against researchers’ duty to pursue their work effectively.
1. See, for instance: Economic and Social Research Council (2023). Van den Hoonaard (2018) reviews what a wide range of ethics regulatory bodies, in several countries, say about vulnerable groups and how they should be treated by researchers. See also Bracken-Roche et al. (2017). A very large number of social categories have been listed as vulnerable at one time or another; see Sieber (1992): 93. For a brief history of the origins of the concern with vulnerability and subsequent interpretations of the term, see Levine et al. (2004). Liamputtong (2007) offers guidance for research with “the vulnerable.”
2. Morrow and Richards (1999); Farrell (2005); Wright (2015).
3. Traianou (2023).
4. Van den Hoonaard (2018, 2020); see also Levine et al. (2004).
5. Wiles (2003): 31.
6. For attempts at clarification in the field of bioethics, see DeBruin (2004); Schroeder and Gefenas (2009); Rogers et al. (2012); Lange et al. (2013); Wendler (2017); Boldt (2019); Gordon (2020).
7. Feinberg (1984).
8. Van den Hoonaard (2018): 305.
9. See entries in the ‘Oxford English Dictionary.’
10. Gordon (2020): 35.
11. Wiles (2003): chapter 3.
12. Hammersley and Traianou (2012): 82–98.
13. Ibidem: 62–64.
14. MacIntyre (1999). The relationship between vulnerability and dependence has been a matter for particular discussion within feminist philosophy: see Purcell (2013); Mackenzie et al. (2013b); Mao (2019); Polychroniou (2022). Here, the importance of embodiment and emotions has been emphasized. Equally important has been an insistence on a positive sense of vulnerability, implying a responsiveness to others (Gilson 2014). See also Behar (1996) and Nortvedt (2003).
15. An alternative way of thinking about this is in terms of “layers of vulnerability”: see Luna (2009).
16. For the case of prisoners, see Mitchelson (2017); for that of psychiatric patients, see Bracken-Roche et al. (2016); and for that of migrants, see Maillet et al. (2017).
17. It also assumes superior ethical capacity on the part of members of IRBs and RECs: see Hammersley (2009).
18. Van den Hoonaard (2018): 316.
19. Juritzen et al. (2011): 641.
20. See Whitney (2023).
21. For discussion of, and research on, IRBs/RECs see Schrag (2010); van den Hoonaard (2011); and Stark (2012).
22. See Hammersley and Traianou (2012): Intro.
23. While some social research involves interventions – such as experimental studies, including randomized controlled trials, and action research – these rarely involve risks of physical harm. For an exception that led to considerable controversy, see Borofsky (2005).
24. The key point is that both the risk and seriousness of any threats of harm must be judged against the routine level of threat, of various kinds, that participants normally experience. It is unreasonable to require that research pose no threat of any kind or of any level, since human life is not, and cannot be rendered, free of all risk.
25. Traianou and Hammersley (2021).
26. Rustin (2010); Dingwall (2012).
27. Sluka (2020).
28. See Gordon (2020).
29. For further discussion of types of harm, see Hammersley and Traianou (2012): chapter 3.
30. Traianou (2023).
31. The preoccupation with researchers reporting crimes or abuse may be a further indication that what drives ethical regulation is institutional concern to avoid public criticism – such as complaints that a researcher was aware of some crime or abuse but did nothing about it – and potential legal action on this basis. We are not denying that the risk of reputational damage and/or legal prosecution can be a genuine concern on the part of funding bodies, universities, etc.; rather, our point is that it should be a secondary matter, and that frequently what seems to be involved is an effort to eliminate all possibility of institutional liability, even though public criticism, and certainly legal liability, are relatively low risks in the case of social research.
32. In extreme cases such abuse can extend to murder: Chantler-Hicks (2023).
33. A controversial example is provided by Alice Goffman’s (2015) research in a low-income black neighbourhood in Philadelphia: she not only did not report crimes she heard about, or witnessed, but admitted actively assisting action on the part of participants that could have resulted in murder. For the arguments of one of her critics, see Lubet (2018): chapter 8.
34. Goodin (1985) argues that these particularistic obligations are frequently exaggerated at the expense of a broader commitment we have to protect the vulnerable, whoever they are. However, we believe these obligations are nevertheless very important, in the context of research and beyond. Exponents of a feminist ethics of care would agree – for a discussion of vulnerability from this ethical perspective, see Dodds (2013).
35. Hammersley and Atkinson (2019): chapter 4. Another issue that may arise from working with participants who are vulnerable to harm is that the researcher may feel an obligation to provide assistance. See, for example, van Dijk (2015).
36. Neal and McLaughlin (2009).
37. Kim (2023).
38. See, for instance, Ozga and Gewirtz (1994); Grek (2011); Grek (2021).
39. See, for example, Alcadipani and Hodgson (2009): 136.
40. Kim (2023).
41. Lee (1995); Nordstrom and Robben (1995); Behar (1996); Lyng (1998); Lee-Treweek and Linkogle (2000); Downey et al. (2007); Bloor et al. (2010); Luxardo et al. (2011); Laar (2014); Chevalier (2015); Sampson (2019).
42. Warden (2013): 152; Nordstrom and Robben (1995): 13.
43. Ibidem: 158.
44. Ibidem: 160.
45. With Behar (1996).
46. Warden (2013): 152.
47. Ibidem: 162.
48. Lee (1995): 2.
49. Warden (2013): 153.
50. Sampson and Thomas (2003).
51. Van den Hoonaard (2018).
52. Levine et al. (2004).
53. Mackenzie et al. (2013a): 2.
54. See, for example, Pickering’s (2019) discussion of heroin users. She also provides an excellent account of the complexities of autonomy and consent in this context.
55. Juritzen et al. (2011): 647.
56. Van den Hoonaard (2018).
57. See also Pickering (2019).
58. Hammersley and Traianou (2012): Conclusion.
59. Bronfenbrenner (1952): 452.
60. See Hammersley and Atkinson (2019): 68–71; van den Hoonaard (2018): 315.
61. See, for instance, Wallis (1977).
62. Hammersley (2009); Dingwall (2016).
Funding: There is no funding attached to this work.
Conflict of Interests: The authors declare that there is no conflict of interest to disclose.
License: This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Alcadipani R., Hodgson D. (2009), “By Any Means Necessary? Ethnographic Access, Ethics, and the Critical Researcher,” Tamara Journal 7 (4): 127–146.
Behar R. (1996), The Vulnerable Observer: Anthropology That Breaks Your Heart, Beacon, Boston.
Bloor M., Fincham B., Sampson H. (2010), “Unprepared for the Worst: Risks of Harm in Qualitative Research,” Methodological Innovations Online 5 (1): 45–55.
Boldt J. (2019), “The Concept of Vulnerability in Medical Ethics and Philosophy,” Philosophy, Ethics, and Humanities in Medicine 14 (1): 1–8.
Borofsky R. (2005), Yanomami: The Fierce Controversy and What We Can Learn From It, University of California Press, Berkeley.
Bracken-Roche D., Bell E., Racine E. (2016), “The ‘Vulnerability’ of Psychiatric Research Participants: Why This Research Ethics Concept Needs To Be Revisited,” The Canadian Journal of Psychiatry/La Revue Canadienne de Psychiatrie 61 (6): 335–339.
Bracken-Roche D., Bell E., Macdonald M.E., Racine E. (2017), “The Concept of ‘Vulnerability’ in Research Ethics: an In-Depth Analysis of Policies and Guidelines,” Health Research Policy and Systems 15 (1): 8.
Bronfenbrenner U. (1952), “Principles of Professional Ethics: Cornell Studies in Social Growth,” The American Psychologist 7: 452–455.
Chantler-Hicks L. (2023), “Coroner calls for regulation of supported housing after London teen stabbed to death by fellow hostel resident,” The Standard, URL = https://www.standard.co.uk/news/london/london-crime-stabbing-coroner-warning-supported-housing-regulation-teenager-hayes-b1063274.html [Accessed 15.04.2023].
Chevalier D. (2015), “‘You Are Not From Around Here, Are You?’: Getting Othered in Participant Observation,” [in:] Contributions from European Symbolic Interactionists: Reflections on Methods, Studies in Symbolic Interaction, Volume 44, T. Müller (ed.), Emerald, Bingley: 1–17.
DeBruin D. (2004), “Looking Beyond the Limitations of ‘Vulnerability’: Reforming Safeguards in Research,” American Journal of Bioethics 4 (3): 76–78.
van Dijk D. (2015), “Mission Impossible: Not Getting Emotionally Involved in Research Among Vulnerable Youth in South Africa,” [in:] Contributions from European Symbolic Interactionists: Reflections on Methods, Studies in Symbolic Interaction, Volume 44, T. Müller (ed.), Emerald, Bingley: 61–77.
Dingwall R. (2012), “How Did We Ever Get into This Mess: the Rise of Ethical Regulation in the Social Sciences,” [in:] Ethics in Social Research, K. Love (ed.), Sage, London: 3–26.
Dingwall R. (2016), “The Social Costs of Ethics Regulation,” [in:] The Ethics Rupture: Exploring Alternatives to Formal Research Ethics Review, W.C. van den Hoonaard, A. Hamilton (eds.), University of Toronto Press, Toronto: 25–42.
Dodds S. (2013), “Dependence, Care, and Vulnerability,” [in:] Vulnerability: New Essays in Ethics and Feminist Philosophy, C. Mackenzie, W. Rogers, and S. Dodds (eds.), Oxford University Press, New York: 181–204.
Downey H., Hamilton K., Catterall M. (2007), “Researching Vulnerability: What about the Researcher?,” European Journal of Marketing 41: 734–739.
Economic and Social Research Council (2023), “Research with potentially vulnerable people,” URL = https://www.ukri.org/councils/esrc/guidance-for-applicants/research-ethics-guidance/research-with-potentially-vulnerable-people/ [Accessed 07.11.2023].
Farrell A. (ed.) (2005), Ethical Research with Children, Open University Press, Maidenhead.
Feinberg J. (1984), Harm to Others, Oxford University Press, New York.
Gilson E. (2014), The Ethics of Vulnerability, Routledge, London.
Goffman A. (2015), On the Run, Picador, New York.
Goodin R.E. (1985), Protecting the Vulnerable: A Reanalysis of Our Responsibilities, University of Chicago Press, Chicago.
Gordon B.G. (2020), “Vulnerability in Research: Basic Ethical Concepts and General Approach to Review,” Ochsner Journal 20 (1): 34–38.
Grek S. (2011), “Interviewing the Education Policy Elite in Scotland: a Changing Picture?,” European Educational Research Journal 10 (2): 233–241.
Grek S. (2021), “Researching Education Elites Twenty Years On. Sex, Lies and… Video Meetings,” [in:] Intimate Accounts of Education Policy Research: The Practice of Methods, C. Addey, N. Piattoeva, J. Law (eds.), Routledge, London: 16–31.
Hammersley M. (2009), “Against the Ethicists: on the Evils of Ethical Regulation,” International Journal of Social Research Methodology 12 (3): 211–225.
Hammersley M., Traianou A. (2012), Ethics in Qualitative Research, Sage, London.
Hammersley M., Atkinson P. (2019), Ethnography: Principles in Practice, Routledge, London.
van den Hoonaard W.C. (2011), The Seduction of Ethics: Transforming the Social Sciences, Toronto University Press, Toronto.
van den Hoonaard W.C. (2018), “The Vulnerability of Vulnerability: Why Social Science Researchers Should Abandon the Doctrine of Vulnerability,” [in:] The SAGE Handbook of Qualitative Research Ethics, R. Iphofen, M. Tolich (eds.), Sage, London: 328–345.
van den Hoonaard W.C. (2020), “‘Vulnerability’ as a Concept Captive in Its Own Prison,” [in:] Handbook of Research Ethics and Scientific Integrity, R. Iphofen (ed.), Springer, Cham: 577–588.
Juritzen T.I., Grimen H., Heggen K. (2011), “Protecting Vulnerable Research Participants: A Foucault-Inspired Analysis of Ethics Committees,” Nursing Ethics 18 (5): 640–650.
Kim C.-Y. (2023), “In Control or at the Mercy of Others? The Role of Ethics Regulation and Power Dynamics in Online Data Collection with UK Secondary School Students,” unpublished paper.
Laar A. (2014), “Researcher Vulnerability: An Overlooked Issue in Vulnerability Discourses,” Scientific Research and Essays 9 (16): 737–743.
Lange M.M., Rogers W., Dodds S. (2013), “Vulnerability in Research Ethics: a Way Forward,” Bioethics 27 (6): 333–340.
Lee R.M. (1995), Dangerous Fieldwork, Sage, Thousand Oaks.
Lee R.M., Renzetti C.M. (1990), “The Problems of Researching Sensitive Topics: an Overview and Introduction,” American Behavioral Scientist 33 (5): 510–528.
Lee-Treweek G., Linkogle S. (eds.) (2000), Danger in the Field, Routledge, London.
Levine C., Faden R., Grady C. (2004), “The Limitations of ‘Vulnerability’ as a Protection for Human Research Participants,” The American Journal of Bioethics 4 (3): 44–49.
Liamputtong P. (2007), Researching the Vulnerable: A Guide to Sensitive Research Methods, Sage, London.
Lubet S. (2018), Interrogating Ethnography, Oxford University Press, New York.
Luna F. (2009), “Elucidating the Concept of Vulnerability: Layers Not Labels,” International Journal of Feminist Approaches to Bioethics 2 (1): 121–139.
Luxardo N., Colombo G., Iglesias G. (2011), “Methodological and Ethical Dilemmas Encountered During Field Research of Family Violence Experienced by Adolescent Women in Buenos Aires,” The Qualitative Report 16 (4): 984–1000.
Lyng S. (1998), “Dangerous Methods: Risk Taking and the Research Process,” [in:] Ethnography on the Edge, J. Ferrell, M.S. Hamm (eds.), Northeastern University Press, Boston MA: 221–251.
MacIntyre A. (1999), Dependent Rational Animals, Open Court, Chicago.
Mackenzie C., Rogers W., Dodds S. (2013a), “Introduction,” [in:] Vulnerability: New Essays in Ethics and Feminist Philosophy, C. Mackenzie, W. Rogers, S. Dodds (eds.), Oxford University Press, New York: 1–33.
Mackenzie C., Rogers W., Dodds S. (eds.) (2013b), Vulnerability: New Essays in Ethics and Feminist Philosophy, Oxford University Press, New York.
Maillet P., Mountz A., Williams K. (2017), “Researching Migration and Enforcement in Obscured Places: Practical, Ethical and Methodological Challenges to Fieldwork,” Social & Cultural Geography 18 (7): 927–950.
Mao X. (2019), “A Levinasian Reconstruction of the Political Significance of Vulnerability,” Religions 10 (1): 1–11.
Mitchelson M.L. (2017), “Relational Vulnerability and the Research Process with Former Prisoners in Athens, Georgia (USA),” Social & Cultural Geography 18 (7): 906–926.
Morrow V., Richards M. (1999), “The Ethics of Social Research with Children: An Overview,” Children & Society 10 (2): 90–105.
Neal S., McLaughlin E. (2009), “Researching Up? Interviews, Emotionality and Policy-making Elites,” Journal of Social Policy 38 (4): 689–707.
Nordstrom C., Robben A.C.G.M. (1995), Fieldwork Under Fire, University of California Press, Berkeley.
Nortvedt P. (2003), “Subjectivity and Vulnerability: Reflections on the Foundation of Ethical Sensibility,” Nursing Philosophy 4 (3): 222–230.
Oxford English Dictionary, URL = https://www.oed.com/ [Accessed 15.04.2023].
Ozga J., Gewirtz S. (1994), “Sex, Lies & Audiotape: Interviewing the Education Policy Elites,” [in:] Researching Education Policy: Ethical and Methodological Issues, D. Halpin, B. Troyna (eds.), Falmer Press, London: 127–142.
Pickering L. (2019), “Paternalism and the Ethics of Researching with People Who Use Drugs,” [in:] The SAGE Handbook of Qualitative Research Ethics, R. Iphofen, M. Tolich (eds.), Sage, London: 411–425.
Polychroniou A. (2022), “Towards a Radical Feminist Resignification of Vulnerability: A Critical Juxtaposition of Judith Butler’s Post-Structuralist Philosophy and Martha Fineman’s Legal Theory,” Redescriptions: Political Thought, Conceptual History and Feminist Theory 25 (2): 113–136.
Purcell E. (2013), “Narrative Ethics and Vulnerability: Kristeva and Ricoeur on Interdependence,” Journal of French and Francophone Philosophy – Revue de la philosophie française et de langue française 21 (1): 43–59.
Rogers W., Mackenzie C., Dodds S. (2012), “Why Bioethics Needs a Concept of Vulnerability,” International Journal of Feminist Approaches to Bioethics 5 (2): 11–38.
Rustin M. (2010), “The Risks of Assessing Ethical Risks,” Sociological Research Online 15 (4): 18.
Sampson H. (2019), “‘Fluid Fields’ and the Dynamics of Risk in Social Research,” Qualitative Research 19 (2): 131–147.
Sampson H., Thomas M. (2003), “Lone Researchers at Sea: Gender, Risk, and Responsibility,” Qualitative Research 3 (2): 165–189.
Schrag Z.M. (2010), Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009, Johns Hopkins University Press, Baltimore.
Schroeder D., Gefenas E. (2009), “Vulnerability: Too Vague and Too Broad?,” Cambridge Quarterly of Healthcare Ethics 18 (2): 113–121.
Sieber J.E. (1992), Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards (Applied Social Research Methods Series), Sage, Thousand Oaks.
Sluka J.A. (2020), “Too Dangerous for Fieldwork? The Challenge of Institutional Risk-management in Primary Research on Conflict, Violence and ‘Terrorism’,” Contemporary Social Science 15 (2): 241–257.
Stark L. (2012), Behind Closed Doors. IRBs and the Making of Ethical Research, University of Chicago Press, Chicago.
Traianou A. (2023), “Research Ethics and the Vulnerability of Political Elites,” paper presented at the European Conference on Educational Research (ECER), University of Glasgow, Glasgow 21–25 August.
Traianou A., Hammersley M. (2021), “Is There a Right Not To Be Researched? Is There a Right To Do Research? Some Questions About Informed Consent and the Principle of Autonomy,” International Journal of Social Research Methodology 24 (4): 443–452.
Wallis R. (1977), “The Moral Career of a Research Project,” [in:] Doing Sociological Research, C. Bell, H. Newby (eds.), Allen and Unwin, London: 149–167.
Warden T. (2013), “Feet of Clay: Confronting Emotional Challenges in Ethnographic Fieldwork,” Journal of Organizational Ethnography 2 (2): 150–172.
Wendler D. (2017), “A Pragmatic Analysis of Vulnerability in Clinical Research,” Bioethics 31 (7): 515–525.
Whitney S. (2023), From Oversight to Overkill: Inside the Broken System That Blocks Medical Breakthroughs – and How We Can Fix It, Rivertowns Books, Irvington.
Wiles R. (2003), What are Research Ethics?, Bloomsbury, London.
Wright K. (2015), “Are Children Vulnerable in Research?,” Asian Bioethics Review 7 (2): 201–213.