Objective: To compare the out-of-box performance of three commercially available continuous speech recognition software packages: IBM ViaVoice 98 with General Medicine Vocabulary; Dragon Systems NaturallySpeaking Medical Suite, version 3.0; and L&H Voice Xpress for Medicine, General Medicine Edition, version 1.2.

Design: Twelve physicians completed minimal training with each software package and then dictated a medical progress note and a discharge summary drawn from actual records.

Measurements: Errors in recognition of medical vocabulary, medical abbreviations, and general English vocabulary were compared across packages using a rigorous, standardized approach to scoring.

Results: The IBM software had the lowest mean error rate for vocabulary recognition (7.0 to 9.1 percent), followed by the L&H software (13.4 to 15.1 percent) and the Dragon software (14.1 to 15.2 percent). The IBM software also performed better than both the Dragon and the L&H software in the recognition of general English vocabulary and medical abbreviations.

Conclusion: This study is one of the few attempts at a robust evaluation of the performance of continuous speech recognition software. The results suggest that, with minimal training, the IBM software outperforms the other products in the domain of general medicine; results may vary in other domains. Although the IBM software was found to have the lowest overall error rate, successive generations of speech recognition software are likely to surpass the accuracy rates found in this investigation, and additional training is likely to improve the out-of-box performance of all three products.

Changes in health care are increasing the demand for electronic records in large organizations. Because of increased sharing of patient care across multiple facilities, VA New England is interested in evolving technology-based approaches to enhancing documentation of patient care in electronic form. Medical professionals who do not have access to transcription services must type their own chart entries, which requires typing skill and significant amounts of time.

By the close of the 1990s, speech recognition software had become a potentially viable and affordable substitute for transcription, costing approximately $2,000 per workstation with software. Software that converts the spoken word to text has been used in many specialized health care settings (e.g., radiology and cardiology). This study was undertaken in part to assess the potential use of speech recognition software in busy clinical settings without transcription support, prior to a decision on significant capital investment.

A search of medical and psychological journal listings (using MEDLINE and PsychLit) revealed few published articles evaluating speech recognition software in health care settings. A number of software reviews have been published in the popular press and computer trade magazines (e.g., PC Magazine, Nov 1999), but none of these publications has provided a systematic comparison of continuous speech recognition software performance. It is noteworthy that the majority of these studies evaluated discrete speech recognition software; only one published study has evaluated continuous speech recognition software.
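The error rates reported above depend on counting recognition errors against a reference transcript. One standard way to score this (a minimal sketch, not necessarily the study's exact scoring protocol) is word-level edit distance, where substitutions, insertions, and deletions are counted and divided by the number of reference words:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions + insertions + deletions)
    divided by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # match or substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# A single substituted word in a four-word reference gives a 25% error rate.
print(word_error_rate("patient denies chest pain", "patient denies chess pain"))
```

Under this kind of scoring, the reported 7.0 percent mean error rate for the IBM software would correspond to roughly 7 misrecognized words per 100 words of dictation.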