Bradley P. Sutton1, Andrew Naber1, Jason Wang1, Jamie L. Perry2, David P. Kuehn3
1Bioengineering Department, University of Illinois at Urbana-Champaign, Urbana, IL, United States; 2Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States; 3Department of Speech and Hearing Sciences, University of Illinois at Urbana-Champaign, Champaign, IL, United States
As the frame rate of dynamic speech imaging with MRI increases, automated extraction of frame-by-frame soft tissue movements becomes critical for evaluating large studies of pathology or cultural differences in movement. This is a challenging task because dynamic MR images suffer from low signal-to-noise ratio and poor soft-tissue contrast between structures. We present a semi-automated algorithm that extracts two tongue positions (tip and dorsum) and compare its tracking results with manual tracings by three trained speech scientists. The semi-automated algorithm correlates well with the manual tracings on data from four study participants.
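The validation described above — correlating the algorithm's frame-by-frame positions against manual tracings — can be sketched with a simple Pearson correlation computation. This is an illustrative sketch only; the function name and example data are assumptions, not taken from the study:

```python
import numpy as np

def trace_correlation(auto_trace, manual_trace):
    """Pearson correlation between two frame-by-frame position traces.

    auto_trace:   positions from the semi-automated tracker, one per frame
    manual_trace: positions traced manually by a rater, one per frame
    (Illustrative helper; not the authors' actual validation code.)
    """
    auto = np.asarray(auto_trace, dtype=float)
    manual = np.asarray(manual_trace, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
    # entry is the Pearson correlation between the two traces.
    return float(np.corrcoef(auto, manual)[0, 1])

# Hypothetical per-frame tongue-tip heights (arbitrary units):
r = trace_correlation([1.0, 2.1, 3.0, 2.0], [1.1, 2.0, 3.1, 1.9])
```

In practice such a correlation would be computed separately for each tracked landmark (tip and dorsum), each coordinate, and each rater.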