Call for Papers

TL;DR.

The workshop focuses on the challenges and techniques of dental image analysis. It aims to facilitate collaboration among researchers, practitioners, and industry professionals to address the complexities and technical subtleties of this critical domain.

Workshop Motivation and Description

Computer-aided diagnosis tools are increasingly popular in modern dental practice, particularly for treatment planning or comprehensive prognosis evaluation [1]. In dental applications, Cone-Beam Computed Tomography (CBCT) and Intra-oral Scan (IOS) are 3D imaging techniques widely used for surgical planning and simulation [1, 2, 3].

CBCT provides information on dental and maxillofacial structures, whereas IOS provides highly accurate surface information on tooth crowns and gingiva. In particular, segmentation of anatomical structures (e.g., teeth, pharynx, mandible) in CBCTs and IOSs and registration between these two modalities are essential prerequisites for surgical planning for dental implants or orthognathic surgery [1, 5].

However, unlike natural images, dental CBCT images come in a 3D voxel format, so classical state-of-the-art computer vision methods cannot be applied directly. Although the medical imaging community has proposed many solutions [6, 7], these methods are trained and validated on small, often private, datasets, and their performance does not meet clinical requirements. The main objective of this workshop is to bridge the gap between the computer vision community and 3D dental image analysis, building on the effort we have made over the past three years by organizing three MICCAI challenge series (six challenges in total): ToothFairy, 3DTeethSeg/Land, and STS.

With this workshop, we aim to bring together researchers, practitioners, and a broader audience to apply the rigorous approaches to performance evaluation required in the medical field, and we call for the latest results and techniques, which are often neglected by application-specific papers. As an example, in the latest edition of the ToothFairy challenge, held at MICCAI 2024, the best-performing methods all used nnU-Net [8] as their core technique. Is this really the best we can do? Can we not explore alternative solutions and try different approaches instead of sticking to “what we know will work”?
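
For readers less familiar with this baseline, the sketch below illustrates the typical self-configuring nnU-Net v2 workflow that recent ToothFairy submissions built upon; the dataset id, folder names, and single-fold training are illustrative assumptions of ours, not values prescribed by the challenges.

```python
# Minimal sketch of the nnU-Net v2 baseline pipeline referenced above.
# Assumptions: nnU-Net v2 is installed and the nnUNet_raw / nnUNet_preprocessed /
# nnUNet_results environment variables are configured as its documentation requires;
# the dataset id and folder names below are hypothetical.
import subprocess

DATASET = "501"  # hypothetical dataset id for a CBCT tooth-segmentation task

# 1. Fingerprint the dataset and let nnU-Net self-configure preprocessing and network plans.
subprocess.run(["nnUNetv2_plan_and_preprocess", "-d", DATASET,
                "--verify_dataset_integrity"], check=True)

# 2. Train the 3D full-resolution configuration on a single fold.
subprocess.run(["nnUNetv2_train", DATASET, "3d_fullres", "0"], check=True)

# 3. Predict segmentations for held-out CBCT volumes.
subprocess.run(["nnUNetv2_predict", "-i", "imagesTs", "-o", "predictions",
                "-d", DATASET, "-c", "3d_fullres", "-f", "0"], check=True)
```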

Through a combination of keynote talks, technical sessions, and panel discussions, we will foster collaboration, exchange ideas, and identify the unique challenges and opportunities in dental-related AI research.

Beyond the workshop, our vision is to establish a robust and sustainable community that convenes regularly, creating a platform to advance the field and drive impactful innovation.

Associated Challenges¹

To achieve the previously mentioned goals, three challenges covering most of the workshop topics have been accepted as MICCAI 2025 satellite events:²

  • ToothFairy3
  • 3DTeethSeg2
  • STSR

The workshop will also provide a valuable stage for presenting and discussing the challenges and their results with many experts in the field. The challenges aim to pave the way for future multimodal image analysis that can more accurately and efficiently inform clinical decision-making, from diagnostics to treatment planning and post-surgical evaluation. The continued advancement of 3D dental imaging, along with improved segmentation, registration, and integration of CBCT and IOS data, holds the potential to revolutionize the way dental procedures are performed, ultimately benefiting patients and practitioners alike.

¹ More details about our challenges will be released soon!

² Please note that 3DTeethSeg2 and ToothFairy3 have been accepted as a joint challenge effort under the name ODIN2025 - Oral and Dental Image aNalysis Challenges at MICCAI 2025.

Collaborative Insights

A core part of the workshop is to provide a platform for exchanging insights, sharing ideas, and promoting fruitful collaborations. To this end, we interleave the sessions with talks by experts on the front lines and with frequent intermissions for discussion, facilitating the exchange of ideas between participants and speakers.

Topics

The primary aim of the ODIN workshop is to identify challenges in 3D dental image analysis, discuss novel solutions to address them, and explore new perspectives and constructive views across the full theory/algorithm/application stack. Potential topics include, but are not limited to:

  • Algorithms and theories of IOS and CBCT registration;
  • Algorithms and theories of multi-instance segmentation in CBCT, IOS, and X-rays;
  • Algorithms and theories of landmark detection in different image modalities;
  • Algorithms and theories for 3D dental image analysis without full supervision, e.g., semi-supervised learning, active learning, and positive-unlabeled learning;
  • Algorithms and theories of cross-domain supervision for 3D dental image analysis, e.g., zero-/one-/few-shot learning, transfer learning, and multi-task learning;
  • Trustworthy artificial intelligence for 3D dental image analysis;
  • Interpretability and explainability of 3D dental image analysis algorithms;
  • Challenges in new learning paradigms of 3D dental image analysis (e.g., prompt engineering);
  • Efficient algorithms covering the topics listed above in environments with limited computational resources.

The workshop will accept long-form (8–10 page) submissions of research papers, resource papers, and system papers.

Modality and Access

This website will serve as a central platform for disseminating the call for papers, promoting the workshop, and providing early access to the planned agenda and talk titles, enabling attendees to plan their attendance based on the content schedule. We will promote the workshop in advance on this website, via our social media channels, and through collaborations with industry and academic partners to attract a diverse community of researchers interested in dental image analysis.

References:

  1. Cui, Z., Fang, Y., Mei, L., Zhang, B., Yu, B., Liu, J., Jiang, C., Sun, Y., Ma, L., Huang, J., et al.: A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images. Nature Communications 13(1), 2096 (2022)
  2. Flügge, T., Derksen, W., Te Poel, J., Hassan, B., Nelson, K., Wismeijer, D.: Registration of cone beam computed tomography data and intraoral surface scans – a prerequisite for guided implant surgery with CAD/CAM drilling guides. Clinical Oral Implants Research 28(9), 1113–1118 (2017)
  3. Jamjoom, F.Z., Kim, D.G., McGlumphy, E.A., Lee, D.J., Yilmaz, B.: Positional accuracy of a prosthetic treatment plan incorporated into a cone beam computed tomography scan using surface scan registration. The Journal of Prosthetic Dentistry 120(3), 367–374 (2018)
  4. Kim, S., Choi, Y., Na, J., Song, I.S., Lee, Y.S., Hwang, B.Y., Lim, H.K., Baek, S.J.: Best of both modalities: Fusing CBCT and intraoral scan data into a single tooth image. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 553–563. Springer (2024)
  5. Rangel, F.A., Maal, T.J., de Koning, M.J., Bronkhorst, E.M., Bergé, S.J., Kuijpers-Jagtman, A.M.: Integration of digital dental casts in cone beam computed tomography scans – a clinical validation study. Clinical Oral Investigations 22, 1215–1222 (2018)
  6. Conze, P.H., Andrade-Miranda, G., Singh, V.K., Jaouen, V., Visvikis, D.: Current and emerging trends in medical image segmentation with deep learning. IEEE Transactions on Radiation and Plasma Medical Sciences 7(6), 545–569 (2023)
  7. Niyas, S., Pawan, S., Anand Kumar, M., Rajan, J.: Medical image segmentation with 3D convolutional neural networks: A survey. Neurocomputing 493, 397–413 (2022)
  8. Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203–211 (2021)