Trusting Experts?

Ezio Di Nucci

Abstract

This paper argues against the claim that “expertise requires trust.” It does so by distinguishing between two versions of this claim: one according to which expertise requires trustworthiness, and one according to which expertise requires actual trust. The paper then argues that the former version is obvious and therefore philosophically uninteresting, while the latter is in fact false. Five cases are deployed to defend these arguments, and along the way we find out more about the nature of expertise and trust as well as, incidentally, corruption and even ChatGPT.
