PiMPiC Framework

Sep 27, 2025
Reza Karimzadeh
Abstract
Deep learning models for 3D medical image segmentation typically require large annotated datasets to achieve high accuracy. However, collecting such datasets is time-consuming, costly, and constrained by privacy regulations. Contrastive learning, a self-supervised technique, enables models to learn meaningful data representations without labeled data. Applying traditional contrastive learning methods to medical images is challenging, however, because the structural similarity of human tissues often produces false negatives: similar tissues are treated as dissimilar. Additionally, slice-wise contrastive learning approaches rely on relative slice positions to form positive and negative pairs, which limits generalization to 3D patches and requires image preregistration. To address these issues, we propose two novel modules for contrastive-learning-based pretraining of 3D segmentation models. The first, Patch Intersection Measurement (PiM), estimates the overlap between two patches in the embedding space. The second, Patch Intersection Contrast (PiC), encourages embeddings of overlapping regions to align closely while pushing non-overlapping regions apart. Experiments on two datasets, for pancreas and kidney cancer segmentation, demonstrate that our method outperforms both state-of-the-art (SOTA) and baseline segmentation models. Notably, for pancreas segmentation, even when trained with only 5% of the labeled data, our method achieves 12% and 4% improvements in Dice score over the baseline and SOTA, respectively, highlighting its effectiveness in low-data scenarios.
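To make the idea concrete, here is a minimal sketch of the two ingredients the abstract describes: a geometric overlap measure between two axis-aligned 3D patches (the supervision signal PiM is trained to estimate from embeddings), and an overlap-weighted contrastive loss in the spirit of PiC. This is not the authors' implementation; the function names, the cubic-patch parameterization, and the hinge-style loss form are illustrative assumptions.

```python
import numpy as np

def patch_overlap(a, b):
    """Fraction of patch a's volume intersecting patch b.

    Patches are axis-aligned cubes given as (z, y, x, size).
    (Illustrative; the paper's patches need not be cubic.)
    """
    inter = 1.0
    for d in range(3):
        lo = max(a[d], b[d])
        hi = min(a[d] + a[3], b[d] + b[3])
        inter *= max(0.0, hi - lo)  # zero if disjoint along this axis
    return inter / float(a[3] ** 3)

def pic_loss(emb_a, emb_b, overlap, margin=1.0):
    """Overlap-weighted contrastive loss (assumed hinge form).

    Pulls embeddings together in proportion to the measured patch
    overlap, and pushes them at least `margin` apart in proportion
    to the non-overlapping fraction.
    """
    dist = np.linalg.norm(emb_a - emb_b)
    pull = overlap * dist ** 2
    push = (1.0 - overlap) * max(0.0, margin - dist) ** 2
    return pull + push
```

For example, two identical 10-voxel cubes give `patch_overlap` of 1.0, and shifting one cube by half its side along one axis gives 0.5; fully overlapping patches with identical embeddings incur zero loss.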
Date
Sep 27, 2025 10:00 AM
Location

Daejeon Conference Center, South Korea