Abstract #4862

Improving the Quality of Synthetic FLAIR Images with Deep Learning Using a Conditional Generative Adversarial Network for Pixel-by-Pixel Image Translation

Akifumi Hagiwara1, Yujiro Otsuka2, Masaaki Hori2, Yasuhiko Tachibana3, Kazumasa Yokoyama4, Shohei Fujita2, Christina Andica2, Koji Kamagata2, Ryusuke Irie2, Saori Koshino1, Tomoko Maekawa2, Lydia Chougar5, Akihiko Wada2, Mariko Yoshida Takemura2, Nobutaka Hattori4, and Shigeki Aoki2

1Radiology, The University of Tokyo Hospital, Tokyo, Japan, 2Radiology, Juntendo University Hospital, Tokyo, Japan, 3Radiology, National Institute of Radiological Sciences, Chiba, Japan, 4Neurology, Juntendo University Hospital, Tokyo, Japan, 5Radiology, Hopital Saint-Joseph, Paris, France

Synthetic FLAIR images are of lower quality than conventional FLAIR images. Here, we aimed to improve synthetic FLAIR image quality using deep learning with pixel-by-pixel image translation through conditional generative adversarial network training. Forty patients with multiple sclerosis (MS) were prospectively included and scanned to acquire synthetic MRI and conventional FLAIR images. The acquired data were divided into 30 training and 10 test datasets. Using deep learning, we improved synthetic FLAIR image quality by generating FLAIR images whose contrast is closer to that of conventional FLAIR images and which show fewer granular and swelling artifacts, while preserving lesion contrast.
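The abstract does not specify the training objective, but pixel-by-pixel conditional GAN translation of this kind (as in pix2pix-style networks) is commonly trained with a generator loss that combines an adversarial term with a pixel-wise L1 term pulling the generated image toward the conventional FLAIR target. The sketch below is a minimal, hypothetical illustration of such a combined objective on toy arrays, not the authors' actual implementation; the function name, the weighting `lam`, and the toy data are all assumptions.

```python
import numpy as np

def pix2pix_generator_loss(d_fake, gen_out, target, lam=100.0):
    """Hypothetical pix2pix-style generator objective:
    an adversarial BCE term (the generator wants the discriminator
    output on fake images, d_fake, to approach 1) plus a
    lambda-weighted L1 term pulling the generated FLAIR image
    toward the conventional FLAIR target."""
    eps = 1e-12                                    # numerical guard for log(0)
    adv = -np.mean(np.log(d_fake + eps))           # BCE against target label 1
    l1 = np.mean(np.abs(gen_out - target))         # pixel-wise L1 fidelity term
    return adv + lam * l1

# Toy example on 4x4 "images" (assumed data, for illustration only)
rng = np.random.default_rng(0)
gen_out = rng.random((4, 4))
target = gen_out.copy()            # perfect reconstruction -> L1 term is 0
d_fake = np.array([0.5])           # discriminator is undecided
loss = pix2pix_generator_loss(d_fake, gen_out, target)
# with L1 = 0, the loss reduces to the adversarial term -log(0.5)
```

The large default weight on the L1 term (here 100, following common pix2pix practice) keeps the translated image anchored to the ground-truth contrast while the adversarial term encourages realistic texture, which matches the abstract's goal of correcting contrast and suppressing granular artifacts without altering lesion conspicuity.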
