A foundation model-driven multi-view collaborative framework for semi-supervised 3D medical image segmentation

Source: Frontiers in Medicine

Original: https://www.frontiersin.org/articles/10.3389/fmed.2025.1744097...

Published: 2026-01-12T00:00:00Z

The study presents a new method for the automatic segmentation of three-dimensional medical images that combines the SAM (Segment Anything Model) foundation model with multi-view learning. The method addresses the problem that creating high-quality, voxel-level annotations of medical images is time- and labor-intensive. The framework uses semi-supervised learning: it trains effectively on a small amount of labeled data combined with a large amount of unlabeled data. A collaborative fusion module combines information from the axial, coronal, and sagittal views to improve the model's understanding of three-dimensional structure. In experiments on MRI images of brain tumors and PET images of the heart, the proposed method outperformed existing semi-supervised approaches and also transferred well between different medical imaging modalities. The result is a scalable solution that reduces the need for manual labeling of medical images while maintaining high accuracy in organ and tumor segmentation.
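The multi-view idea above can be sketched in a few lines: per-view probability maps are reoriented into a common axial frame, averaged, and thresholded into confident pseudo-labels for the unlabeled volumes. This is a minimal illustration under assumed conventions (slicing axes, simple averaging, a confidence threshold `tau`); the paper's actual collaborative fusion module is learned and more elaborate.

```python
import numpy as np

def to_axial(pred, view):
    """Reorient a stack of per-slice predictions back to the (D, H, W)
    axial frame. Assumes a volume of shape (D, H, W), with coronal
    slices stacked along H and sagittal slices stacked along W
    (illustrative convention, not taken from the paper)."""
    if view == "axial":      # already (D, H, W)
        return pred
    if view == "coronal":    # stacked as (H, D, W)
        return np.transpose(pred, (1, 0, 2))
    if view == "sagittal":   # stacked as (W, D, H)
        return np.transpose(pred, (1, 2, 0))
    raise ValueError(f"unknown view: {view}")

def fuse_views(p_axial, p_coronal, p_sagittal, tau=0.9):
    """Average reoriented per-view foreground probabilities and derive
    pseudo-labels: 1 where confidently foreground, 0 where confidently
    background, -1 (ignore) where the views disagree or are uncertain."""
    fused = (to_axial(p_axial, "axial")
             + to_axial(p_coronal, "coronal")
             + to_axial(p_sagittal, "sagittal")) / 3.0
    pseudo = np.where(fused >= tau, 1,
                      np.where(fused <= 1.0 - tau, 0, -1))
    return fused, pseudo
```

The `-1` "ignore" label is a common trick in semi-supervised segmentation: voxels where the fused confidence is low are simply excluded from the unsupervised loss, so only high-agreement regions drive training on unlabeled data.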