Abstract #3408

A Generative Whole-Brain Segmentation approach for PET/MR imaging via deep learning

Wenbo Li1, Zhenxing Huang1, Hongyan Tang1, Yaping Wu2, Yunlong Gao1, Jianmin Yuan3, Yang Yang4, Yan Zhang4, Na Zhang1, Hairong Zheng1,5, Dong Liang1,5, Meiyun Wang2, and Zhanli Hu1,5
1Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China, 2Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou 450003, China, 3Central Research Institute, United Imaging Healthcare Group, Shanghai 201807, China, 4Beijing United Imaging Research Institute of Intelligent Imaging, Beijing 100094, China, 5Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen 518055, China

Synopsis

Keywords: Segmentation, Brain, Generative medical segmentation, Cross attention

Motivation: Segmentation of brain tissues plays a significant role in quantifying and visualizing anatomical structures based on PET/MRI systems.

Goal(s): Most current methods either focus on MR images with high soft-tissue contrast or rely on MR images as prior information to improve the segmentation accuracy of brain PET images; our goal is automatic, accurate whole-brain segmentation of PET images themselves.

Approach: In this paper, we propose a generative whole-brain segmentation model for PET images that achieves automatic and accurate segmentation.
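
The abstract does not detail the network architecture, but its "Cross attention" keyword and the claim of incorporating multimodal information suggest attention-based fusion of features from the two modalities. The sketch below is a generic scaled dot-product cross-attention step, not the authors' model; the names `pet_feats` and `mr_feats` and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross attention: each query token attends
    over a separate key/value sequence (e.g. another modality)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)  # (Nq, Nk) similarities
    weights = softmax(scores, axis=-1)      # each row sums to 1
    return weights @ values                 # (Nq, d) fused features

rng = np.random.default_rng(0)
pet_feats = rng.normal(size=(4, 8))  # hypothetical PET token features
mr_feats = rng.normal(size=(6, 8))   # hypothetical MR token features
fused = cross_attention(pet_feats, mr_feats, mr_feats)
print(fused.shape)  # (4, 8): one fused vector per PET token
```

In such a scheme, PET-derived tokens would act as queries while MR-derived tokens supply keys and values, letting anatomical context weight the fusion; in a real model the projections would be learned linear layers rather than raw features.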

Results: Numerical experiments demonstrate that the proposed method incorporates multimodal information while achieving efficient and accurate segmentation, enabling better visualization and quantification.

Impact: This study enables precise brain PET segmentation without MR data, benefiting clinical diagnostics and neuroscience by advancing our understanding of brain metabolism and activity, potentially leading to new therapies and improved patient outcomes.
