Call for Papers

TL;DR.

The workshop focuses on the challenges of, and techniques for, dental image analysis. It aims to foster collaboration among researchers, practitioners, and industry professionals to address the complexities and technical subtleties of this critical domain.

Motivation and Context

Oral and maxillofacial imaging is rapidly evolving: Cone-Beam CT (CBCT), Intra-Oral Scans (IOS), panoramic/cephalometric radiographs, and high-resolution intra-oral photographs are now routinely acquired for diagnosis, orthodontic planning, implant surgery, orthognathic treatment planning, and longitudinal follow-up [2, 3, 5]. In parallel, learning-based methods for segmentation, registration, landmark detection, morphology analysis, and reporting have matured [1, 4, 7]. However, progress is often limited by small, siloed datasets, single-modality or narrowly scoped benchmarks, and evaluation protocols that do not reflect clinical variability (multi-center, multi-vendor, multi-protocol acquisition, heterogeneous populations, and evolving workflows) [8, 9]. As a result, scientific contributions that perform well on a particular dataset may fail to translate into robust, clinically reliable systems.

Dental pipelines are inherently multi-structure and multi-modal. Clinically actionable workflows must combine 3D anatomy from CBCT (roots, canals, bone, airways) with high-fidelity surfaces from IOS (crowns/gingiva), complemented by 2D radiographs, photographs, and structured clinical notes/reports describing occlusion, periodontal findings, pathology, and treatment context [6, 7]. Robust multi-modal learning across sites and protocols remains underexplored, and trustworthy deployment aspects (calibration, uncertainty, failure modes, domain shift, and fairness) are rarely addressed end-to-end.

Scope and Objectives

Building on the first edition (ODIN 2025, MICCAI 2025), ODIN 2026 provides a focused forum for clinically grounded, multi-modal, and multi-center oral/dental AI. We move beyond algorithm-only comparisons by connecting methods, data, evaluation, and systems to real clinical needs.

Key objectives include:

  • Clinically actionable multi-modal AI: methods that jointly exploit CBCT, IOS, 2D radiographs, photographs, and/or reports for diagnosis, planning, surgical simulation, outcome prediction, and automated reporting;
  • Generalization and robustness: multi-center/multi-vendor evaluation, out-of-distribution (OOD) generalization, calibration, uncertainty estimation, and robustness under protocol and population shift;
  • Multi-structure learning at scale: segmentation/instance parsing and landmarking across the anatomical complexity of dento-maxillofacial imaging (teeth/roots, mandibular canal, craniofacial bones, airways, soft tissue, pathology-relevant findings);
  • Reproducibility and translation: standardized protocols, transparent reporting and error analysis, and attention to workflow integration and resource-aware deployment [9].

Technically, we welcome algorithmic and systems contributions spanning segmentation and parsing; cross-modality registration and fusion (e.g., IOS–CBCT, IOS–photo, radiograph–CBCT); landmark detection and morphometrics; weak/semi/self-supervision and active learning; foundation and vision-language models tailored to dental data; report modeling; and trustworthiness (OOD detection, interpretability, fairness, and efficient inference).

References:

  1. Bolelli, F., Marchesini, K., van Nistelrooij, N., Lumetti, L., Pipoli, V., Ficarra, E., …, Grana, C.: Segmenting Maxillofacial Structures in CBCT Volumes. In: Proceedings of the Computer Vision and Pattern Recognition Conference (2025)
  2. Cui, Z., Fang, Y., Mei, L., Zhang, B., Yu, B., Liu, J., Jiang, C., Sun, Y., Ma, L., Huang, J., et al.: A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images. Nature Communications 13(1), 2096 (2022)
  3. Flügge, T., Derksen, W., Te Poel, J., Hassan, B., Nelson, K., Wismeijer, D.: Registration of cone beam computed tomography data and intraoral surface scans: a prerequisite for guided implant surgery with CAD/CAM drilling guides. Clinical Oral Implants Research 28(9), 1113–1118 (2017)
  4. Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203–211 (2021)
  5. Jamjoom, F.Z., Kim, D.G., McGlumphy, E.A., Lee, D.J., Yilmaz, B.: Positional accuracy of a prosthetic treatment plan incorporated into a cone beam computed tomography scan using surface scan registration. The Journal of Prosthetic Dentistry 120(3), 367–374 (2018)
  6. Kim, S., Choi, Y., Na, J., Song, I.S., Lee, Y.S., Hwang, B.Y., Lim, H.K., Baek, S.J.: Best of Both Modalities: Fusing CBCT and Intraoral Scan Data Into a Single Tooth Image. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 553–563. Springer (2024)
  7. Liu, K., Elbatel, M., Chu, G., Shan, Z., Sum, F.H.K.M.H., Hung, K.F., Zhang, C., Li, X., Yang, Y.: FDTooth: Intraoral Photographs and CBCT Images for Fenestration and Dehiscence Detection. Scientific Data 12(1), 1007 (2025)
  8. Math, S.Y., Ameli, N., Stefani, C.M., Kung, J.Y., Punithakumar, K., Amin, M., Pacheco-Pereira, C.: Augmented intelligence in oral and maxillofacial radiology: a systematic review. Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology (2025)
  9. Vahdati, S., Khosravi, B., Mahmoudi, E., Zhang, K., Rouzrokh, P., Faghani, S., Moassefi, M., Tahmasebi, A., Andriole, K.P., Chang, P., et al.: A guideline for open-source tools to make medical imaging data ready for artificial intelligence applications: a society of imaging informatics in medicine (SIIM) survey. Journal of Imaging Informatics in Medicine 37(5), 2015–2024 (2024)