Special Session I

Call for Papers & Submission

Domain Adaptation and Generalization: Addressing Distribution Shifts in Vision-Language Models and Medical Imaging Tasks

Domain adaptation (DA) has emerged as a critical paradigm for addressing distribution shifts: systematic discrepancies between training and deployment environments, for instance between the pretraining domain (e.g., web-scale data or natural images) and the downstream deployment domain (e.g., medical imaging or robotics). Many systems rely on frozen vision-language or vision-only models that are either used directly or minimally fine-tuned for domain-specific tasks. While traditional DA methods assume access to target-domain data, recent advances have shifted focus to domain generalization (DG), which eliminates explicit adaptation to the target domain by training models on diverse source distributions so that they generalize to unseen domains. This capability is especially vital for vision-language models (e.g., CLIP, Flamingo) and medical imaging systems (e.g., SAM, MedSAM, UNet), where shifts in imaging hardware, clinical protocols, or textual semantics can drastically degrade performance, with potentially catastrophic consequences. Despite recent progress, current research remains confined to narrow settings: single modalities (e.g., 2D images), closed-set tasks (e.g., classification), and controlled domain gaps. Real-world applications, however, demand DA/DG methods that can handle multimodal shifts, open-world recognition, and resource-constrained environments; this session aims to address these challenges.

Vision-language models face unique challenges under domain shift. Although pretrained on web-scale data, they often under-perform in specialized domains (e.g., medical imaging) due to misaligned visual-textual distributions. For example, a CLIP-based diagnostic tool may misinterpret radiology reports when deployed across hospitals with differing terminologies. Similarly, medical imaging models can struggle with scanner variability (e.g., MRI vs. CT) or demographic biases (e.g., under-representation of minority groups). While foundation models promise task-agnostic generalization, their robustness to real-world shifts remains underexplored, a gap this session aims to address through novel algorithms and comprehensive benchmarks.

Beyond technical advances, this session emphasizes societal and safety implications. DA/DG methods should be integrated with model calibration, fairness constraints, and uncertainty estimation to ensure reliability in high-stakes applications (e.g., healthcare diagnostics or autonomous driving). For example, a domain-adapted chest X-ray model should not only generalize across hospitals but also detect when its predictions are unreliable for under-represented patient groups. We also invite work on label-efficient DG (e.g., self-supervised adaptation) and cross-modal generalization (e.g., aligning MRI scans with clinical notes), which are essential for low-resource settings.

Related topics include but are not limited to:

● Algorithms. Test-time adaptation, adversarial DG, and multimodal alignment (vision-language/medical).
● Benchmarks. Quantifying shifts in vision-language tasks (e.g., text-to-image retrieval) and medical imaging (e.g., cross-institutional datasets).
● Trustworthiness. Fairness, calibration, and bias mitigation under domain shifts.
● Applications. Case studies in healthcare (e.g., adapting pathology models to new labs) and autonomous systems (e.g., vision-language navigation in unseen environments).
● Uncertainty. The role of uncertainty in guiding where and how a model should adapt when faced with distribution shifts.

By bridging theoretical advances with real-world needs, this session will catalyze progress toward deployable, equitable, and safe AI systems, which is critical for domains where failures carry severe consequences.

Chairman: Dr. Tahir Qasim Syed, IBA Karachi, Pakistan (E-mail: tahirqsyed@gmail.com)

Bio: Tahir Qasim Syed received his Ph.D. in Computer Vision from Université d'Évry Paris-Saclay, France, under the advising of Prof. Vincent Vigneron, where he focused on medical imaging and statistical learning. He is currently an Assistant Professor at the Department of Computer Science, Institute of Business Administration (IBA) Karachi, Pakistan. His research interests include representation learning, self-supervised learning, and statistical inference, with a particular focus on robust and scalable algorithms and models, e.g., those for domain adaptation. He has also contributed to research on text recognition in natural images and gesture recognition using depth data and probabilistic models. Dr. Syed also serves as a reviewer and technical committee member for international conferences (e.g., NeurIPS, ICML, ICLR, AAAI) and journals (e.g., NN, TNNLS) in the field of machine learning.

Chairman: Dr. Vincent Martin Vigneron, Université d'Évry Paris-Saclay, France (E-mail: vincent.vigneron@univ-evry.fr)

Bio: Vincent Vigneron (Ph.D. 2000, Habil. 2007, University of Évry) began his career as an engineer at the CEA before joining the Computer Science Department at Évry as an Associate Professor in 2004. He served in that role until 2013, then became Dean of International Affairs at Paris-Saclay University from 2013 to March 2017, where he helped found the European University Alliance for Global Health (EUGLOH). Concurrently, he conducted research at the University of São Paulo and UNICAMP (2013–2016). For the past decade, he has been Full Professor of Computational Mathematics and Engineering at Évry, where he leads the SIAM team (EA 4526) within the IBISC Laboratory. His research focuses on machine learning and source separation for personalized medicine (especially brain imaging and neurological disorders) and autonomous vehicles. He has also served as Program Chair for international conferences including STATIM, ICA, and ICNSC.

 

Submission Guide



Submission Deadline: August 25, 2025

Submit your contributions via the Electronic Submission System: https://easychair.org/conferences/?conf=icmv2025 (.pdf only). If you do not have an EasyChair account, please create one first. Then log in to EasyChair and choose Special Session I. For any questions, please email secretary@icmv.org.


"We sincerely invite you and your colleagues to mark this event on your calendar immediately and make your plans to join us in Paris, France!"
Copyright © 2025 18th International Conference on Machine Vision (www.icmv.org)