Evaluating Large Language Models for Automated CPT Code Prediction in Endovascular Neurosurgery.

Publication/Presentation Date

1-24-2025

Abstract

Large language models (LLMs) have been used to automate tasks such as writing discharge summaries and operative reports in neurosurgery. The present study evaluates their ability to identify Current Procedural Terminology (CPT) codes from operative reports. Three LLMs (ChatGPT 4.0, AtlasGPT, and Gemini) were evaluated on their ability to provide CPT codes for diagnostic or interventional procedures in endovascular neurosurgery at a single institution. Responses were classified as correct, partially correct, or incorrect, and the percentage of correctly identified CPT codes was calculated. The chi-square test and Kruskal-Wallis test were used to compare responses across LLMs. A total of 30 operative notes were included in the present study. AtlasGPT provided partially correct CPT codes for 98.3% of procedures, while ChatGPT and Gemini provided partially correct responses for 86.7% and 30% of procedures, respectively (P < 0.001). AtlasGPT identified CPT codes correctly in an average of 35.3% of procedures, followed by ChatGPT (35.1%) and Gemini (8.9%) (P < 0.001). Pairwise comparison among the three LLMs revealed that AtlasGPT and ChatGPT outperformed Gemini. Untrained LLMs are able to identify partially correct CPT codes in endovascular neurosurgery. Training these models could further enhance their ability to identify CPT codes and reduce healthcare expenditure.
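
As a rough illustration of the statistical comparison described in the abstract, the following Python sketch shows how response classifications and per-note accuracy could be compared across three LLMs with SciPy's chi-square and Kruskal-Wallis tests. This is not the study's code, and all counts and percentages below are invented placeholders for demonstration only.

```python
# Hypothetical sketch of the abstract's statistical comparison.
# All numbers are illustrative placeholders, not the study's data.
from scipy.stats import chi2_contingency, kruskal

# Rows: LLMs (AtlasGPT, ChatGPT, Gemini); columns: response categories
# (correct, partially correct, incorrect) tallied over the operative notes.
contingency = [
    [11, 18, 1],   # AtlasGPT  (illustrative counts)
    [10, 16, 4],   # ChatGPT   (illustrative counts)
    [3,  6, 21],   # Gemini    (illustrative counts)
]

chi2, p_chi, dof, _ = chi2_contingency(contingency)
print(f"Chi-square: chi2={chi2:.2f}, dof={dof}, p={p_chi:.4f}")

# Per-note percentage of correctly identified CPT codes for each LLM
# (again illustrative), compared with the Kruskal-Wallis test.
atlas_pct   = [100, 50, 33, 0, 50, 100, 25, 0, 50, 33]
chatgpt_pct = [100, 50,  0, 0, 50, 100, 25, 0, 33, 33]
gemini_pct  = [  0,  0, 33, 0,  0,  25,  0, 0,  0, 33]

h_stat, p_kw = kruskal(atlas_pct, chatgpt_pct, gemini_pct)
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")
```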

Volume

49

Issue

1

First Page

15

Last Page

15

ISSN

1573-689X

Disciplines

Medicine and Health Sciences

PubMedID

39853605

Department(s)

Department of Surgery

Document Type

Article
