Deep neural network-based approaches to medical image segmentation rely heavily on access to extensive amounts of annotated data, which is often difficult to obtain due to time constraints, logistical complexity, and the need for specialized expertise. In medical imaging in particular, collecting and annotating enough data to cover real-world variation is prohibitively time-consuming and costly. In recent years, Disentangled Representation Learning (DRL) has been proposed as an approach to learning general representations with limited or no supervision. A good general representation can be fine-tuned for new target tasks using modest amounts of data, or applied directly to unseen domains with strong performance on the corresponding task.

Our intention is to investigate the feasibility of employing DRL frameworks for multi-modal and cross-modal segmentation. In this task, at least two modalities are required (e.g., CT and MRI scans), and the goal is to accurately predict the segmentation mask by exploiting all available modalities. This would make it possible to train a cross-modality segmentation network when annotated data is scarce in the target domain (e.g., MRI scans) by leveraging an annotation-rich source domain (e.g., CT scans). While DRL techniques have already been applied in this context, there remains significant room for improvement: current approaches lack interpretability, making it hard to understand which representations contribute to the final segmentation mask, and they require a bi-directional setup (i.e., both modalities must be available for each patient, even at inference time).
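To make the setting concrete for candidates, below is a minimal, purely illustrative PyTorch sketch of the kind of architecture involved, assuming the content/style (anatomy/modality) split common in the DRL segmentation literature. All names and sizes here (DisentangledSegNet, conv_block, style_dim, the two-modality setup) are our own hypothetical choices for the example, not the project's actual method. The key idea it shows: the segmentation head reads only the modality-invariant content representation, while a reconstruction path (content + style back to image) lets the unlabeled modality contribute a self-supervised training signal.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, preserving spatial size.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class DisentangledSegNet(nn.Module):
    """Toy content/style-disentangled segmentation model (illustrative only)."""

    def __init__(self, n_classes=8, style_dim=16):
        super().__init__()
        # Shared anatomy (content) encoder: intended to be modality-invariant.
        self.content_enc = conv_block(1, 32)
        # One style encoder per modality (here: index 0 = CT, 1 = MRI).
        self.style_feat = conv_block(1, 32)
        self.style_encs = nn.ModuleList(
            [nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(32, style_dim)) for _ in range(2)]
        )
        # Segmentation head reads only the content representation.
        self.seg_head = nn.Conv2d(32, n_classes, 1)
        # Decoder reconstructs the image from content + style, so the
        # unlabeled modality still provides a training signal.
        self.decoder = nn.Sequential(conv_block(32 + style_dim, 32),
                                     nn.Conv2d(32, 1, 1))

    def forward(self, x, modality):
        content = self.content_enc(x)                           # (B, 32, H, W)
        style = self.style_encs[modality](self.style_feat(x))   # (B, style_dim)
        seg_logits = self.seg_head(content)                     # (B, n_classes, H, W)
        # Broadcast the style code spatially and fuse it with the content map.
        b, _, h, w = content.shape
        style_map = style[:, :, None, None].expand(b, -1, h, w)
        recon = self.decoder(torch.cat([content, style_map], dim=1))
        return seg_logits, recon

# Usage sketch: supervised segmentation loss on the annotated modality (CT),
# reconstruction loss on both modalities.
model = DisentangledSegNet()
ct = torch.randn(2, 1, 64, 64)
seg_logits, recon = model(ct, modality=0)
```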

Objectives of the research project include:

- Designing and exploring improvements to cutting-edge DRL multi-modal segmentation methods (e.g., incorporating diffusion models into DRL).
- Training multi-modal segmentation models and conducting comparative analyses on real-world datasets (e.g., the MM-WHS Multi-Modality Whole Heart Segmentation dataset).

What we expect from you:

- A serious work ethic and a proactive attitude, with willingness to engage in a research project and understand the technical questions involved.
- Knowledge of deep learning and experience with Python and deep learning frameworks, from courses and projects. We don't require proficiency; you will have plenty of time to learn!
- Interest in publishing the final results at international medical imaging and/or deep learning conferences.

What you can expect from us:

An exciting and relevant AI research topic at one of the best cancer institutes in the world. You will work in the Radiology Department of the Netherlands Cancer Institute, a multidisciplinary environment where you will collaborate closely with medical professionals. In particular, you will join the AI for Multi-center Data research team. You will have the opportunity to explore and develop your own ideas, under the direct supervision of a PhD student and the PI of the research line, Dr. Wilson Silva. The research line has its own biweekly group meetings, a technical journal club, and 1:1 meetings, and you will also take part in the department meetings. You will be granted access to the research high-performance computing facility.