Abstract #4148

nnMU-Net: A self-configuring neural network architecture with multi-modality information fusion for medical MR image segmentation

Yuhan Liu1, Yu Liu1, and Haolin Zhan1
1Department of Biomedical Engineering, Hefei University of Technology, Hefei, China

Synopsis

Keywords: Segmentation

Motivation: Leveraging multi-modality information to improve the accuracy of magnetic resonance image (MRI) segmentation plays an important role in clinical applications.

Goal(s): A general self-configuring neural network architecture is developed for accurate medical image segmentation; it leverages the information interaction of multi-modality MRI and automatically configures its parameters to accommodate different modalities.

Approach: U-Net is adopted as the backbone with self-configuring modifications; each modality is input independently and the features are fused after each encoder/decoder stage to maximize feature utilization (a minimal sketch of this fusion scheme follows the synopsis).

Results: The network automatically configures its parameters to accommodate different modalities and improves segmentation accuracy compared with existing methods.

Impact: Our method leverages the features of multi-modality MRI to improve segmentation accuracy and configures itself automatically, including preprocessing, network architecture, training, and post-processing, for different modalities. It also opens the possibility of application to other multi-modality segmentation tasks.
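To illustrate the per-modality encoding with stage-wise fusion described in the Approach, the following is a minimal sketch, not the authors' implementation: it assumes one encoder branch per modality, a 1x1x1 convolution as the fusion operation after each stage, and illustrative channel counts; the actual nnMU-Net configuration (including its self-configuring choices) is not specified here.

# Minimal sketch (assumption): per-modality encoder branches whose features
# are fused after each stage, in the spirit of the described approach.
# Fusion by 1x1x1 convolution and the stage depths are illustrative choices.
import torch
import torch.nn as nn

class ConvStage(nn.Module):
    """Two 3x3x3 convolutions with instance normalization, as in U-Net-style encoders."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.LeakyReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.LeakyReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class MultiModalEncoder(nn.Module):
    """One encoder branch per modality; branch features are fused after each stage."""
    def __init__(self, n_modalities=2, channels=(16, 32, 64)):
        super().__init__()
        self.stages = nn.ModuleList()   # stages[s][m]: stage s of modality branch m
        self.fusions = nn.ModuleList()  # per-stage fusion of concatenated branch features
        in_ch = [1] * n_modalities
        for out_ch in channels:
            self.stages.append(nn.ModuleList(ConvStage(c, out_ch) for c in in_ch))
            self.fusions.append(nn.Conv3d(out_ch * n_modalities, out_ch, 1))
            in_ch = [out_ch] * n_modalities  # each branch continues from its own features
        self.pool = nn.MaxPool3d(2)

    def forward(self, xs):
        # xs: list of per-modality tensors, each of shape (B, 1, D, H, W)
        fused_skips = []
        for stage, fuse in zip(self.stages, self.fusions):
            xs = [branch(x) for branch, x in zip(stage, xs)]
            fused_skips.append(fuse(torch.cat(xs, dim=1)))  # fused feature passed to the decoder
            xs = [self.pool(x) for x in xs]
        return fused_skips

# Usage example with two modalities (e.g., T1- and T2-weighted volumes):
if __name__ == "__main__":
    enc = MultiModalEncoder(n_modalities=2)
    t1 = torch.randn(1, 1, 32, 64, 64)
    t2 = torch.randn(1, 1, 32, 64, 64)
    print([s.shape for s in enc([t1, t2])])

The fused feature maps returned here would serve as the skip connections of a U-Net-style decoder, so that information from all modalities is available at every resolution level.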

