This story about AI-powered transcription used in hospitals should alarm Chief Informatics Officers and anyone else in healthcare administration. You know how, when you're at the doctor's office, they sometimes record your patient interview rather than take notes? Well, it turns out that the AI tools some organizations use to transcribe those recordings add significant hallucinations and then delete the original audio, making it impossible to verify what you told the doctor!
Ask your favorite AI chatbot what a hallucination is and you might get a result like the one I got from Microsoft Copilot: “AI hallucinations are when an artificial intelligence system, like a chatbot or a language model, generates information that seems plausible but is actually incorrect or completely made up. It’s like when someone dreams up a story that sounds real but isn’t based on reality.”
Your result might differ slightly, but I wouldn’t expect it to be wrong; it’s not the type of question that seems to cause trouble. In my experience, the more specific or obscure the question, the more ‘interesting’ the answers become. So why so many problems with transcription? My guess is that even though these models are trained on large amounts of healthcare information, natural conversation about a complex personal topic, such as one's health, is still too individual and varied for them to handle reliably.
Don't get me wrong: AI has plenty of legitimate uses today. Have you checked out Process Street yet? We use it at Improv for process improvement and automation, and I use it to sharpen critical email responses. I love tools like Microsoft Copilot as a search replacement, and I've even developed some custom GPTs to simplify my complex work world. There are myriad examples of how AI can improve our day-to-day experience, even in its current state. But that doesn't make it ready for complex project management, automated coding, government operations, and definitely not patient records! At least not out of the box, without some very intentional effort. The takeaway here is that AI can be a huge help; just remember what you are dealing with, educate yourself, and prepare accordingly.
It WILL get there. You know it. I know it. It's inevitable. But not until it is at least as accurate as a medical transcriber of the human type. According to the article, "A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in eight out of every ten audio transcriptions he inspected..." YIKES!
So why are "over 30,000 clinicians and 40 health systems" using tools based on this technology? It's sexy, it's cool, and, most importantly, it saves a ton of time in this crazy-busy environment. And after reading a recent LinkedIn article by @Taylor Borden, it might not be unreasonable to assume that employees are “overwhelmed by how quickly their jobs are changing” and feel the need to learn and implement AI solutions rapidly. She goes on to say:
“LinkedIn data shows that over the next five years, the skills required at work will change by 50% — and innovation in artificial intelligence is expected to push that figure to a whopping 70%.”
Are we panicking? I know I did a little a year ago when AI seemed to be moving too fast. But maybe that’s just me.
I'd like to propose an alternative to today's standard of “hurry, install, run with it!” Let's use traditional IT methods: assessment, analysis, professional testing, and built-in change management processes. It's not as daunting as it sounds.
First, engage someone who understands healthcare, vendor selection, process improvement, and the current state of AI. Then do what they tell you to do!
I've built a Process Street template to help you evaluate these tools. If you'd like access, let me know.
The outline below covers some key tasks often missing or underappreciated.
Once you have gone through the implementation process, it’s important to ensure periodic review; AI changes daily. If you skip this step, the most likely outcome is results that aren't what you expect, so make the review part of the project plan. Change management should never end, and neither should validating your AI results. Good luck in your implementations, and please let me know in the comments (or chat with me directly) how it goes!
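If you want something concrete to anchor that ongoing validation, here's a minimal sketch (my own illustration, not from the article or any specific vendor) of one common way to score an AI transcript against a human-verified reference: word error rate (WER), the word-level edit distance divided by the reference length. The sample sentences are hypothetical.

```python
# Illustrative sketch: score an AI-generated transcript against a
# human-verified reference using word error rate (WER).
# WER = (word-level Levenshtein edit distance) / (number of reference words).

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Return WER between a reference transcript and an AI hypothesis."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion (hallucinated word)
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: the AI appends content that was never said.
reference = "patient reports mild chest pain after exercise"
ai_output = "patient reports mild chest pain after exercise and dizziness"
print(f"WER: {word_error_rate(reference, ai_output):.2%}")
```

Run periodically on a sample of visits, a rising WER (or any insertions at all in clinical notes) is a signal to pull the tool back for re-validation.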