Journal of Clinical Images and Medical Case Reports

ISSN 2766-7820
Short Commentary - Open Access, Volume 6

Deep learning for computer vision in dentistry: From black to glass box?

Paul Monsarrat1,2; Raphaël Richert3,4,5; Sylvain Cussat-Blanc1,6; Maxime Ducret4,5,7*

1Artificial and Natural Intelligence (ANITI), Toulouse Institute, France.

2RESTORE Research Center and Department of Oral Medicine, Université de Toulouse, INSERM, CNRS, EFS, ENVT, Université P Sabatier, Toulouse University Hospital (CHU), France.

3Laboratory of Contact Mechanics and Structures, France.

4Hospices Civils de Lyon, France.

5Faculty of Odontology, Lyon 1 University, France.

6Institute of Research in Informatics (IRIT), Toulouse, France.

7Laboratory of Tissue Biology and Therapeutic Engineering, France.

*Corresponding Author : Maxime Ducret
Laboratory of Tissue Biology and Therapeutic Engineering, France.
Email: [email protected]

Received : Jan 29, 2025

Accepted : Feb 19, 2025

Published : Feb 26, 2025

Archived : www.jcimcr.org

Copyright : © Ducret M (2025).

Abstract

Faced with the increasing volume and complexity of data, dentistry, like medicine, stands to benefit significantly from the transformative potential of Artificial Intelligence (AI). However, efforts should be made to foster a culture of transparency and explainability around AI. Current deep learning models based on convolutional neural networks offer limited explainability. Alternatives that are easily interpretable for experts exist, whether through feature extraction combined with Machine Learning (ML) techniques and explainability methods, or through optimized, interpretable image-processing pipelines built from selected image-processing functions. Meeting this central need is essential for our community to establish long-term trust and facilitate the acceptance of this technology among dental clinicians, researchers, and patients alike.

Keywords: Ethics; Artificial intelligence; Computer vision; Dentistry; Deep learning.

Citation: Monsarrat P, Richert R, Cussat-Blanc S, Ducret M. Deep learning for computer vision in dentistry: From black to glass box?. J Clin Images Med Case Rep. 2025; 6(2): 3484.

Short commentary

Faced with the increasing volume and complexity of data, dentistry, like medicine, stands to benefit significantly from the transformative potential of Artificial Intelligence (AI). This technology has the capacity to enrich every stage of patient health care, including the management of chronic conditions [4]. This growing enthusiasm for AI, an enthusiasm we share, reflects the promises and hopes that AI generates in the dental community, but it sometimes overshadows the challenges and ethical issues associated with AI [2].

This recent enthusiasm is largely due to the impressive results reported in dental research by neural network models, primarily through Deep Learning (DL) and the use of Convolutional Neural Networks (CNNs), in almost all disciplines of oral medicine, whether for image analysis, diagnosis, or therapeutic planning, through image classification, object detection, and segmentation [4,5]. Intrinsically, CNNs are complex neural network models, veritable “black boxes” that prioritize performance over transparency and explainability. The development of explainability frameworks, such as Grad-CAM, aims to bridge this gap.
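To make the Grad-CAM idea concrete, the sketch below shows its core computation in isolation: class-score gradients are pooled into per-channel weights, which then weight the convolutional feature maps before a ReLU. The arrays here are random placeholders standing in for a real network's activations and back-propagated gradients; this is a minimal illustration, not a full implementation.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap from a convolutional layer's activations
    and the gradients of the class score with respect to those activations.

    feature_maps: (K, H, W) activations A^k of the chosen conv layer.
    gradients:    (K, H, W) d(class score)/dA^k from back-propagation.
    Returns an (H, W) relevance map rescaled to [0, 1].
    """
    # alpha_k: global-average-pool the gradients over the spatial axes
    weights = gradients.mean(axis=(1, 2))                 # shape (K,)
    # Weighted sum of the feature maps, then ReLU to keep positive evidence
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    if cam.max() > 0:
        cam /= cam.max()                                  # normalize to [0, 1]
    return cam

# Toy example: two 4x4 feature maps with random stand-in values
rng = np.random.default_rng(0)
maps = rng.random((2, 4, 4))
grads = rng.random((2, 4, 4))
heatmap = grad_cam(maps, grads)
print(heatmap.shape)  # (4, 4)
```

In practice the resulting map is upsampled and overlaid on the input radiograph; note that such saliency maps indicate *where* the model looked, not *why* it decided, which is precisely the limitation discussed next.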

While these solutions offer some degree of control and oversight, they regrettably do not allow us to understand the determinants of decision-making in a way that is easily comprehensible to humans, let alone explainable to patients. Moreover, DL models require substantial data and computing resources, and recent examples have demonstrated the challenges of applying DL-trained models to datasets from different populations [3]. To be socially accepted and widely democratized, future solutions must be capable of maintaining the trust of both clinicians and patients. From our perspective, this will necessitate two key steps: (1) Fostering a culture of transparency and accountability around the data used for learning and validation in DL models, together with a more cautious and humble approach, and (2) Developing AI systems that intelligently integrate into the natural human decision-making process to enhance it rather than distort it.

Alternatives to conventional DL that are easily interpretable for experts exist. One approach involves extracting image descriptors with the input of domain experts, such as shape and texture descriptors (feature extraction). These descriptors can then be coupled with Machine Learning (ML) for prediction. While some ML approaches are also black boxes, combining them with explainability methods makes it possible to choose a sensible compromise between transparency and performance, which is not possible in the realm of DL. This combination provides a means to better comprehend the decision-making criteria, offering valuable insights even from a pathophysiological perspective for medical professionals (e.g., a change in the density or texture of a structure can be interpreted in terms of an underlying biomedical process). Additionally, other strategies are based on fundamental image-processing functions, which can be efficiently assembled and optimized using genetic programming (Figure 1) [1]. Both strategies offer “glass-box” alternatives for the segmentation or classification of biomedical data, especially in oral medicine. The use of DL in dentistry is not only a matter of model performance: explainability should also help improve our knowledge of pathophysiology, which hybrid techniques closer to clinical semiology make possible.
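The feature-extraction route described above can be sketched in a few lines. The example below is deliberately hypothetical: synthetic “healthy” and “lesion” image patches, three hand-crafted descriptors (mean density, texture variance, edge density), and a logistic regression fitted by gradient descent whose coefficients a clinician can read directly, each weight linking one named descriptor to the prediction. Real pipelines would use richer descriptors and validated data.

```python
import numpy as np

def describe(img: np.ndarray) -> np.ndarray:
    """Three simple, human-readable descriptors of a grayscale patch:
    mean density, texture (intensity variance), and edge density."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.var(), np.mean(np.hypot(gx, gy))])

FEATURE_NAMES = ["mean density", "texture variance", "edge density"]

# Synthetic toy data: "lesion" patches are darker and more textured
rng = np.random.default_rng(1)
healthy = [rng.normal(0.8, 0.05, (16, 16)) for _ in range(50)]
lesion = [rng.normal(0.4, 0.20, (16, 16)) for _ in range(50)]
X = np.array([describe(i) for i in healthy + lesion])
y = np.array([0] * 50 + [1] * 50)

# Standardize features, then fit logistic regression by gradient descent;
# the resulting coefficients are the "glass-box" explanation.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted lesion probability
    w -= 0.1 * X.T @ (p - y) / len(y)        # gradient step on the weights
    b -= 0.1 * (p - y).mean()

accuracy = np.mean((p > 0.5) == y)
for name, coef in zip(FEATURE_NAMES, w):
    print(f"{name}: {coef:+.2f}")
```

Here a negative weight on “mean density” reads directly as “darker patches predict lesions”, the kind of pathophysiologically meaningful statement a saliency map cannot provide.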

While the benefits of AI in computer vision for dentistry are undeniable, efforts should be made to foster a culture of transparency and explainability around AI. Meeting this central need is essential for our community to establish long-term trust and facilitate the acceptance of this technology among dental clinicians, researchers, and patients alike.

Declarations

Competing interest: The authors have no conflicts of interest relevant to this article.

Acknowledgements: We gratefully acknowledge the national infrastructure “ECELL France: Development of mesenchymal stem cell-based therapies” (PIA-ANR-11-INBS-005).

Figure 1: Lower and higher levels of transparency in dental computer vision using deep learning. Current deep learning models based on convolutional neural networks offer limited explainability. Alternatives that are easily interpretable for experts exist, whether through feature extraction combined with ML techniques and explainability methods, or through optimized, interpretable image-processing pipelines built from selected image-processing functions.

References

  1. Cortacero K, McKenzie B, Müller S, Khazen R, Lafouresse F, Corsaut G, et al. Evolutionary design of explainable algorithms for biomedical image segmentation. Nat Commun. 2023; 14: 7112.
  2. Mörch CM, Atsu S, Cai W, Li X, Madathil SA, Liu X, et al. Artificial Intelligence and Ethics in Dentistry: A Scoping Review. J Dent Res. 2021; 100: 1452–1460.
  3. Nguyen D, Kay F, Tan J, Yan Y, Ng YS, Iyengar P, et al. Deep Learning-Based COVID-19 Pneumonia Classification Using Chest CT Images: Model Generalizability. Front Artif Intell. 2021; 4: 694875. https://www.frontiersin.org/article/10.3389/frai.2021.694875.
  4. Schwendicke F, Krois J. Data Dentistry: How Data Are Changing Clinical Care and Research. J Dent Res. 2022; 101: 21–29.
  5. Schwendicke F, Samek W, Krois J. Artificial Intelligence in Dentistry: Chances and Challenges. J Dent Res. 2020; 99: 769–774.