AI Usage within EMS

DrParasite

As AI slowly takes over the world, companies are investing in projects with AI components (often at the expense of others), and I've just spent the last two hours using Microsoft Copilot to enhance my file-management PowerShell scripts. So I'm curious: does anyone have ideas for how EMS providers could take advantage of AI (Copilot, Bard, Gemini, or your AI chatbot of choice) to be more efficient at our jobs, or at the EMS system level?
 
I've been using Copilot more and more to consider medication selection, dosing, and interactions. If I were still in the field, I think I'd be doing the same as a sort of QA/QI exercise--i.e., reviewing something I did or didn't do and exploring how changing some actions might have improved the outcome. It wouldn't be formal research--just playing what-if, logging the results, and using them to suggest alternatives. Perhaps it could migrate from the individual to the system level with better procedures and oversight.
 
Have you tried Anthropic's Claude or its API for coding? 🤌

My primary care doctors are now using AI for charting.

It always amazes me after 20 years in EMS to see so many people arguing about EKGs. Surely AI can help with that. It looks like it already has.

System Status Management (SSM) was all the rage; I wonder if AI is able to improve the models.

Real-time translations are pretty amazing.

My non-EMS students use AI to review for tests (create a 50 question...). I use it daily for improving my lessons. This could really step up the quality of EMS instruction.
 
ImageTrend has launched AI in EMS reporting. I've not tested it yet, but apparently you can voice-dictate your narrative and it'll populate treatments and vitals in their sections based on what you say.

Their promo video looks impressive
 
I'm a trauma registrar for a Level 1 trauma center, so I read several EMS narratives every day. There are several smaller EMS agencies near us that are using AI to write their narratives. The last line is the credit line, with text like "This report was generated with AI tools". The narratives themselves are not that good, almost always simply a rehashing of all the checkboxes in the rest of the report. There isn't usually a lot of thought included about what the EMS provider was thinking, ruling out, or subjectively observing.
 
My MIL refuses to allow her MD to use AI for charting, and they have to abide by patient wishes. Her MD and the staff always roll their eyes and manually type in the notes. I fully agree with her after reviewing the AI-generated chart from her first visit, which contained misspellings and incorrect labs and Dx info. This could have lethal consequences.

I also refuse to allow my MD to use AI charting. Write it down. You know, like you are supposed to…
 
That's been an issue for me, too. Best example: the summary of my last PCP visit included a recommendation that I lose four pounds. One of the reasons for my visit was unexplained weight loss.

Are we being helped more than hurt by AI? How are we supposed to know?
 
Have you tried Anthropic's Claude or its API for coding? 🤌
My job only allows Copilot... so I've played around with others at home, but for PowerShell, I have to stick with the M$ option...
My primary care doctors are now using AI for charting.
After seeing some of the AI notes from my meetings, I don't know if I would want to do that, especially if a human doesn't reread and correct any errors.
It always amazes me after 20 years in EMS to see so many people arguing about EKGs. Surely AI can help with that. It looks like it already has.
How many people disregard the LIFEPAK's diagnosis of a STEMI or any other cardiac issue? I would have thought objective criteria could easily be applied to an EKG (a toy sketch of what I mean is at the end of this post)...
System Status Management (SSM) was all the rage; I wonder if AI is able to improve the models.
I hope not... SSM sucks... it sucked back in the day, and it still sucks. My previous agency had AI in their SSM 20 years ago... sometimes its predictions were right; other times, not so much... past performance doesn't guarantee future results.
Real-time translations are pretty amazing.
Agreed... I can't wait to use it on a drunk patient who is slurring and speaking a language that I can't understand.
My non-EMS students use AI to review for tests (create a 50 question...). I use it daily for improving my lessons. This could really step up the quality of EMS instruction.
As someone who used AI to design an upcoming EMS presentation... it's a start... but all of the questions and layouts need to be reviewed and validated by a competent instructor. Remember: garbage in, garbage out.
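To make the "objective criteria" point concrete, here is a minimal toy sketch in Python of a threshold-style STEMI screen, assuming the ST-segment elevation in each lead has already been measured. The lead groupings and cutoffs are simplified for illustration; the real criteria vary by age and sex, and a monitor's interpretation algorithm handles far more than this.

```python
# Toy STEMI screen: flag a territory when two or more contiguous leads meet
# their ST-elevation cutoff. Groupings and thresholds are simplified for
# illustration; real algorithms also handle signal quality, baseline wander,
# mimics like LBBB, and age/sex-specific cutoffs.

CONTIGUOUS_GROUPS = {
    "inferior": ["II", "III", "aVF"],
    "lateral":  ["I", "aVL", "V5", "V6"],
    "anterior": ["V3", "V4"],
    "septal":   ["V1", "V2"],
}

def threshold_mm(lead: str) -> float:
    # V2-V3 conventionally use a higher cutoff than the other leads.
    return 2.0 if lead in ("V2", "V3") else 1.0

def stemi_flag(st_elevation_mm: dict) -> list:
    """Return territories where >= 2 contiguous leads meet the cutoff."""
    hits = []
    for territory, leads in CONTIGUOUS_GROUPS.items():
        elevated = [ld for ld in leads
                    if st_elevation_mm.get(ld, 0.0) >= threshold_mm(ld)]
        if len(elevated) >= 2:
            hits.append(territory)
    return hits

# Example: elevations (in mm) measured from a hypothetical 12-lead
print(stemi_flag({"II": 1.2, "III": 1.5, "aVF": 1.1, "V2": 0.5}))  # ['inferior']
```

The hard part is everything upstream of that dictionary (signal quality, baseline wander, mimics), which is exactly where the arguments live.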
 
How much we're being helped versus hurt has actually been one of my concerns... how much brainpower are we offloading to AI, and what happens when the AI fails or hallucinates?

How much blind faith would medical providers put in AI, and how many old-school providers would refuse to use it? Would anyone trust a surgeon who learned a procedure from YouTube? What about an anesthesiologist who used AI for all of his or her drug dosages?
 
Well, I don't know if your "provider" used AI or not for that weight-loss case summary, but in another scenario, the recommendation could very well have come from a distracted, maybe even ill-prepared, practitioner using an EMR template. It's all in who is using the tool.
 
That offloading worry is a very real concern with AI. I would argue (and there is some evidence to support the idea) that despite all its value and usefulness, the internet has already made us dumber in some important ways, and avoiding over-reliance on AI, both in education and in practice, will become increasingly difficult.
 
The fact that it's all in who is using the tool only adds to the perceived mystery and unreliability of the tool.

Based on the error I mentioned plus others in that visit summary plus others in visit summaries by other clinicians, I can only assume that composition tools--perhaps as low-tech as word processors or as sophisticated as state-of-the-art AI--were used to produce inaccurate summaries. If I have to wonder who's using which tool and how well they're using it...well, that makes me nervous about semi-automated patient interviews.
 
As far as I know, AI is currently capable only of searching and doing limited interpretation of existing entered data. There are only three futures ahead of us:

1. AI becomes capable of independent discovery, in which case we'll likely see Arnold Schwarzenegger travel back in time for Sarah Connor.
2. The human race's intelligence dwindles, less new data is entered, and AI stagnates.
3. The human race continues discovery and data entry, and AI becomes better at mining and interpreting, and more useful.
 
When it comes to charting, AI can only write what it knows. If all you do is enter vitals and click checkboxes, that's all it "knows". The whole idea behind having a summary in the chart is to fill in the spaces around the checkboxes. That has to be done by the provider, not offloaded to an AI.
That being said, using AI to convert voice to text in a summary field makes sense and can speed up charting for providers who may not type well.

As for other things, I like the idea behind Google's NotebookLM. It will only create content based on the files you give it, which would make it great for education review: enter the textbook and class notes, then ask it questions or have it make quizzes or a podcast discussing the topics. Tell it, "I have a quiz on chapter 12 this week; make a podcast covering the major points."
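The mechanism behind that "only the files you give it" behavior is simple enough to sketch: at its core, grounding means pasting your sources into the prompt and instructing the model to refuse anything outside them. A minimal Python sketch of the idea (call_llm is a hypothetical stand-in for whatever chat API you use; NotebookLM's real pipeline adds retrieval, citations, and much more):

```python
from pathlib import Path

def build_grounded_prompt(source_paths, request):
    """Assemble a prompt that restricts the model to the supplied sources.

    This is the essence of 'grounded' generation: paste the source material
    into the prompt and tell the model to stay inside it.
    """
    sources = "\n\n".join(
        f"--- SOURCE: {p} ---\n{Path(p).read_text(encoding='utf-8')}"
        for p in source_paths
    )
    return (
        "Answer using ONLY the sources below. If the sources do not contain "
        "the answer, reply 'not covered in the provided material.'\n\n"
        f"{sources}\n\nREQUEST: {request}"
    )

# Hypothetical usage -- call_llm() stands in for your chat API of choice:
# prompt = build_grounded_prompt(
#     ["chapter12_notes.txt", "class_slides.txt"],
#     "Write a 10-question quiz covering the major points of chapter 12.",
# )
# print(call_llm(prompt))
```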
 
We have the "AI" enabled in ESO for PCRs. As mentioned above, it just compiles data from the rest of the chart. Put in that you gave 50mcg of fent, and it'll generate "50mcg of fent given via IV". It's absolute garbage.
 
The current gen of AI (at least in ImageTrend) is an overhaul of that system. Instead of generating a narrative based on data found elsewhere in the report, it will backfill meds, procedures, disposition, etc. based on what you enter in the narrative.
 
That seems like a smarter way to do it, provided it works as advertised
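Conceptually, that direction of flow is an information-extraction problem: parse the free-text narrative and backfill the structured fields. A toy Python sketch of the idea using plain regex (a vendor would presumably use a clinical NLP model or an LLM rather than this; the drug names, patterns, and sample narrative are illustrative only):

```python
import re

# Toy narrative-to-fields extraction. Regexes are only enough to show the
# flow: free-text narrative in, structured chart entries out.

MED_PATTERN = re.compile(
    r"(?P<dose>\d+(?:\.\d+)?)\s*(?P<unit>mcg|mg|ml)\s+(?:of\s+)?"
    r"(?P<drug>\w+)(?:\s+(?P<route>IV|IM|IN|IO)\b)?",
    re.IGNORECASE,
)

VITALS_PATTERN = re.compile(
    r"BP\s*(?P<sys>\d{2,3})/(?P<dia>\d{2,3})|HR\s*(?P<hr>\d{2,3})",
    re.IGNORECASE,
)

def backfill(narrative: str) -> dict:
    """Pull medication and vitals entries out of a dictated narrative."""
    meds = [{k: v for k, v in m.groupdict().items() if v}
            for m in MED_PATTERN.finditer(narrative)]
    vitals = [{k: v for k, v in m.groupdict().items() if v}
              for m in VITALS_PATTERN.finditer(narrative)]
    return {"medications": meds, "vitals": vitals}

print(backfill("Pt given 50 mcg of fentanyl IV for pain. BP 142/88, HR 96."))
# {'medications': [{'dose': '50', 'unit': 'mcg', 'drug': 'fentanyl', 'route': 'IV'}],
#  'vitals': [{'sys': '142', 'dia': '88'}, {'hr': '96'}]}
```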
 
Thinking about AI in charting a bit more, I'd like to see AI used in a different way: have AI check your narrative against protocols to ensure you didn't miss something or put it out of order. Let it be a tool rather than shoehorning the AI into replacing human input.
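A minimal sketch of the shape of that tool, assuming a protocol can be expressed as an ordered list of keyword-matched steps (a real version would need NLP or an LLM to cope with how providers actually phrase things; the chest-pain steps below are illustrative, not an actual protocol):

```python
# Toy protocol audit: check a narrative for required steps and their order.
# Keyword matching stands in for real language understanding.

CHEST_PAIN_STEPS = [
    ("aspirin", ["asa", "aspirin"]),
    ("12-lead", ["12-lead", "12 lead", "ekg"]),
    ("nitro",   ["nitro", "ntg"]),
]

def audit(narrative: str, steps=CHEST_PAIN_STEPS):
    """Report steps missing from the narrative, and steps documented
    earlier than a step the protocol says should precede them."""
    text = narrative.lower()
    positions = []
    for name, keywords in steps:
        hits = [text.find(k) for k in keywords if k in text]
        positions.append((name, min(hits) if hits else None))

    missing = [name for name, pos in positions if pos is None]
    found = [(name, pos) for name, pos in positions if pos is not None]
    out_of_order = [name for (name, pos), (_, prev_pos)
                    in zip(found[1:], found) if pos < prev_pos]
    return {"missing": missing, "out_of_order": out_of_order}

print(audit("Obtained 12-lead, then gave ASA 324mg PO."))
# {'missing': ['nitro'], 'out_of_order': ['12-lead']}
# (12-lead documented before aspirin, which the toy protocol lists first)
```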
 
Thinking about AI in charting a bit more, I'd like to see AI being used in a different way. Have AI check your narrative against protocols to ensure you didn't miss something or put it out of order. Let is be a tool rather than trying to shoehorn the AI into replacing human input.
Would that actually improve narrative writing, or induce more reliance on an AI-generated narrative, with all of its inherent flaws?
 