Keywords: Language Models, Retrieval-Augmented Generation (RAG)
Motivation: When medical staff lack clinical experience, they may select inappropriate MRI protocols and parameters, wasting time and resources.
Goal(s): We want to show that combining RAG technology with LLMs can help determine MRI protocols at a level matching seasoned professionals—all while keeping patient privacy intact.
Approach: Using clinical doctors' MRI exam requests as our benchmark, we compared accuracy across different experience levels and professions to see how cloud-based and local LLMs differ in performance.
Results: After adding RAG technology, the cloud-based LLM matched the expertise of experienced radiologists, while the local LLM reached accuracy similar to that of senior radiologic technologists.
Impact: We demonstrated the feasibility of RAG-based LLMs for early MRI decision-making, offering a new tool for learning and error prevention. Cloud-based and local LLMs each have strengths in accuracy and privacy, respectively, but neither is perfect yet.
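To make the approach concrete, a minimal sketch of a retrieval-augmented pipeline is shown below. The knowledge-base snippets, overlap-based retrieval, and prompt template are illustrative assumptions for exposition only, not the study's actual implementation or data.

```python
# Hedged sketch of a RAG step: retrieve reference protocol text for an
# exam request, then build an augmented prompt for an LLM (cloud or local).
# All entries and the scoring scheme are hypothetical.

def tokenize(text):
    """Lowercase whitespace tokenization (illustrative, not production)."""
    return set(text.lower().split())

# Hypothetical knowledge base of MRI protocol guidance snippets.
KNOWLEDGE_BASE = [
    "Suspected acute stroke: brain MRI with DWI and FLAIR sequences.",
    "Knee internal derangement: knee MRI with PD fat-sat sequences.",
    "Liver lesion characterization: abdominal MRI with dynamic contrast.",
]

def retrieve(query, kb, top_k=1):
    """Rank snippets by word overlap with the exam request."""
    q = tokenize(query)
    scored = sorted(kb, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:top_k]

def build_prompt(request, kb):
    """Assemble the augmented prompt passed to the LLM."""
    context = "\n".join(retrieve(request, kb))
    return (
        "Using the reference protocols below, recommend an MRI protocol.\n"
        f"References:\n{context}\n"
        f"Exam request: {request}"
    )

print(build_prompt("brain MRI for suspected acute stroke", KNOWLEDGE_BASE))
```

In practice the overlap scorer would be replaced by a dense-embedding search over institutional protocol documents, which is what keeps a local LLM usable without sending patient data to the cloud.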