STUDY: BLS better than ALS for trauma, stroke, respiratory distress

Fade to black.
 
I hope not. It's in our best interest to attempt to clarify when and why Advanced Life Support is a useful addition to BLS-level care. If we don't, it will be done for us by those who may not have any interest in paramedics remaining a viable profession.
 
How is that a rebuttal? He doesn't refute any claim or argue that the study is flawed. He points out that no study can control for everything and that the study is worthy of consideration.

His point is well taken. If you cannot control for everything, you can either decide:

1. One cannot know anything with enough certainty to change practice based on evidence.
2. One can use the best evidence available to inform practice and further inquiry.

What one absolutely cannot do is choose option 1 when a study says something undesirable and option 2 when it matches a preconceived desirable position. There are some around here who are willing to do exactly this.

EMS has too many problems to use such mental gymnastics to roadblock potential progress.

Similarly, anyone who thinks this study supports eliminating ALS is also missing the point.

I would say it's a rebuttal because it includes the statement: "'Their premise is flawed,' said Howard Mell, a spokesman for the American College of Emergency Physicians and director of emergency services in Iredell County, N.C. He said ALS ambulances transport much more serious patients. 'That's why they have much worse outcomes.'"
 
and goes on to imply this is part of the reason for the better BLS outcomes. Does the study address this?
No, the study does not. On-scene and response times were not taken into account, as I understand the study. I also do not have access to the full text of the article, which is very frustrating.

This study is crap for a number of reasons. Any retrospective data needs to be taken with a grain of salt and with the understanding that there are a lot of variables affecting patient outcomes that cannot be accounted for with most available data sets. This study used billing codes and did not account for the actual interventions performed, which could be a marker for patient severity. I know Summit and others have said that the authors adjusted for patient severity, but other articles have pointed out that the authors only used other billing codes to "adjust" for comorbidities. While statistics give more accurate results with larger data pools, the largest data pool on the planet is worthless if the data does not adequately represent the population being studied. I think most would find it difficult to figure out what a patient was going through if the only information they have is a bunch of billing codes. Plus, what was the reason that some of the patients were transported BLS instead of ALS? Knowing that the vast majority of EMS systems send ALS to the sickest patients, or to every patient, what is different about the BLS population? Were they less sick at the time of dispatch? Was this adjusted for?

This study generates more questions than answers. Statistics and evidence-based medicine are very helpful within the constraints of common sense and expert opinion. Statistics are a blunt instrument and will spit out numbers, correlations, and associations even if the data going into the equation is crap or the author's interpretation of the data is crap. My issue here is that a non-expert with no experience or understanding of prehospital or emergency medicine is reaching for a conclusion from incomplete data. To be very clear, what this study shows is that patients who receive ALS bills in a non-rural area and have Medicare tend to have higher mortality. I could conclude that ALS Medicare bills increase mortality, but we all know that is crazy. Essentially, that is what this study is doing. The study should conclude: ALS has higher mortality except for AMI, but the causes are unclear; ALS may be a marker for patient severity, or some unknown variable during ALS care may be contributing to mortality, and further study is warranted.
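For what it's worth, here is a toy sketch of the kind of confounding by indication being described here. The numbers and model are entirely invented (this is not the paper's data or method); it only shows how a naive ALS-vs-BLS comparison can "find" higher ALS mortality when sicker patients are simply more likely to get ALS:

```python
# Toy simulation -- invented numbers, not the study's data or method.
# Sicker patients are more likely to be dispatched ALS, and sicker patients
# are more likely to die, so a naive ALS-vs-BLS comparison shows higher
# ALS mortality even though ALS has zero causal effect in this model.
import random

random.seed(1)

def simulate_patient():
    severity = random.random()                          # unmeasured true acuity, 0..1
    got_als = random.random() < 0.2 + 0.6 * severity    # sicker -> more likely ALS
    died = random.random() < 0.05 + 0.4 * severity      # death driven by severity only
    return got_als, died

patients = [simulate_patient() for _ in range(100_000)]
als = [died for got_als, died in patients if got_als]
bls = [died for got_als, died in patients if not got_als]

print(f"ALS mortality: {sum(als) / len(als):.3f}")  # roughly 0.29 in this toy model
print(f"BLS mortality: {sum(bls) / len(bls):.3f}")  # roughly 0.21 in this toy model
```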

Common sense and other, more rigorous, studies tell us that ALS helps in some situations and hurts in others, but the effect is heavily area dependent. Most studies have shown that trauma outcomes are better with BLS or no EMS in urban settings. Beyond that there is not much other data, and even if there were, it would carry the caveat of only applying to EMS in the area studied. Paramedic and overall EMS quality varies by zip code, and data collection and reporting by EMS is virtually absent. In an article responding to this study, an author cited a mere 14% compliance rate for ALS crews administering epinephrine every 3-5 minutes in cardiac arrest. If the quality of care from paramedics is poor, then we need to examine that; a study comparing ALS and BLS bills and outcomes is not an adequate exploration of the topic.

We, as an EMS community, should not be settling for crap studies with insufficient data. We should be advocating for more comprehensive and mandatory data reporting to a state health office. Comprehensive data will let us actually do some higher quality retrospective studies that include important clinical data points such as vital signs and interventions performed. This study makes an effort to examine something that warrants much closer examination but falls far short of actually providing any meaningful conclusions.

Analysis of the study by Dr Lacocque: http://epmonthly.com/article/back-to-basics/
Editorial in Annals of Internal Medicine by Drs. Sasson and Haukoos: http://annals.org/article.aspx?articleid=2456126
Paramedic compliance with ACLS drugs: http://www.ncbi.nlm.nih.gov/pubmed/16801287
 
See... Someone had the time that I did not. I agree with Expat and Shocksalot.

Where I didn't feel like elaborating to, let's be honest, mostly Remi, because I am tired of debating the color of the sky with him on every topic... someone else stepped up to the plate.
We just never really agree on much; 'tis the way of the world. I anticipate someone will now delve into every line of text in search of a "gotcha" moment in an effort to disprove an entire page-long counterpoint.

I respect all parties, but I also side with the view that this study is an ineffective means of measuring patient treatment levels in the field. I feel this will fade to black, and nothing will change regardless of the findings. Maybe that makes me the pessimist, but look at EMS... We aren't exactly our own best friends.
 
This study is crap for a number of reasons.

Analysis of the study by Dr Lacocque: http://epmonthly.com/article/back-to-basics/
Editorial in Annals of Internal Medicine by Drs. Sasson and Haukoos: http://annals.org/article.aspx?articleid=2456126
Paramedic compliance with ACLS drugs: http://www.ncbi.nlm.nih.gov/pubmed/16801287

I have read quite a few analyses of this study now, and have not yet seen a single author articulate a problem with the way it was conducted, or point to any methodological or statistical flaws. Aside from the normal disagreement with the conclusions that you find with literally every published paper, the common theme in the commentary about this one - which I completely agree with and have stated at least once on this forum already - is that this paper has the same (gasp!) limitations as every other retrospective study. Primarily, that studies like this can not be used to show causality, because there are way too many unaccounted-for variables. And also, because no specific intervention or practice was investigated, it is impossible to use this study to change practice. Studies like this are foundational in that they cast a broad net and generate questions that can hopefully be researched in a more controlled and focused way. The fact that such a basic thing needs to be pointed out continuously to the prehospital world is a bit worrisome.

The fact that this study cannot show causality does not make the study "crap" and is not a flaw. Is the fact that a Honda Accord can't pull a 12,000 pound trailer up a steep mountain grade a flaw? No; that isn't what Accords were designed for. They aren't intended to do that, so the fact that they cannot do that is not a flaw. The things that they are intended for, they are very good at, which is why no one who knows anything about vehicles would refer to a Honda Accord as "crap". The same is true of prospective vs. retrospective studies. They are different types of research that are done different ways and used for different things. Just because one is not the same as the other does not make either one "crap". If you purchase a Honda Accord expecting to use it like a 1-ton diesel and the car fails miserably, that is 100% your fault, not the fault of the Accord or the folks who designed it. A great apple makes a terrible orange.

In the commentary he provided for epmonthly, one almost gets the feeling that Dr. Lacocque views the lack of ability to show causality as a flaw. That misunderstanding (if it in fact exists - I could certainly be wrong) is unfortunate, but it is really irrelevant, because he also writes "While it is tempting to dismiss Sanghavi et al’s findings, they are in line with past research", and concedes that "Numerous studies have even corroborated Sanghavi’s findings, showing the lack of efficacy of out-of-hospital advanced airway use [6,7], vasopressin [8], IV drugs [9] and even ALS care as a whole [10]." He also describes the study as ".......a large, robust study, corroborated by others, and whose authors worked hard to control for every variable they could". Doesn't sound like he agrees with shocksalot that this paper is "crap".

Similarly, in their commentary for AIM, Sasson and Haukoos describe the paper as "well conducted" and write that it "raises important questions about the effectiveness of prehospital care". They then go on to explain - again - the limitations of a retrospective study and why it cannot be used to show a causal link between ALS care and worse outcomes. Even if they don't find this paper particularly useful, and even though they disagree with the authors' conclusions, I doubt that, even if pressed to do so, these doctors would describe this study as "crap".

In fact, the only people I've seen referring to this study as "crap" and/or calling for outright dismissal of its findings are a few of the commenters on EMTlife who - and I say this with all due respect - don't know even the very first thing about research, and whose opinions are nothing more than an emotional reaction to the authors' conclusions.

The truth is, no one here would be calling this study "crap" if it showed the opposite conclusions. Sandpit and Shocksalot, you would both be jumping for joy, gloating at the skeptics and pointing to this study as "evidence" that ALS works, and you wouldn't care in the least that it is "just" a retrospective statistical analysis and not actual research. It would probably never even occur to you to look into the basic study design.

As I wrote before, the findings of this study are compelling, both because of the sheer size of the study and because in several ways it repeats the findings of previous studies. There is probably something to it - exactly what, I don't know - and anyone who says otherwise is literally ignoring the facts.
 
Aside from the normal disagreement with the conclusions that you find with literally every published paper, the common theme in the commentary about this one - which I completely agree with and have stated at least once on this forum already - is that this paper has the same (gasp!) limitations as every other retrospective study. Primarily, that studies like this can not be used to show causality, because there are way too many unaccounted-for variables. And also, because no specific intervention or practice was investigated, it is impossible to use this study to change practice. Studies like this are foundational in that they cast a broad net and generate questions that can hopefully be researched in a more controlled and focused way. The fact that such a basic thing needs to be pointed out continuously to the prehospital world is a bit worrisome.

The fact that this study cannot show causality does not make the study "crap" and is not a flaw. Is the fact that a Honda Accord can't pull a 12,000 pound trailer up a steep mountain grade a flaw? No; that isn't what Accords were designed for. They aren't intended to do that, so the fact that they cannot do that is not a flaw. The things that they are intended for, they are very good at, which is why no one who knows anything about vehicles would refer to a Honda Accord as "crap". The same is true of prospective vs. retrospective studies. They are different types of research that are done different ways and used for different things. Just because one is not the same as the other does not make either one "crap". If you purchase a Honda Accord expecting to use it like a 1-ton diesel and the car fails miserably, that is 100% your fault, not the fault of the Accord or the folks who designed it. A great apple makes a terrible orange.

In the commentary he provided for epmonthly, one almost gets the feeling that Dr. Lacocque views the lack of ability to show causality as a flaw. That misunderstanding (if it in fact exists - I could certainly be wrong) is unfortunate, but it is really irrelevant, because he also writes "While it is tempting to dismiss Sanghavi et al’s findings, they are in line with past research", and concedes that "Numerous studies have even corroborated Sanghavi’s findings, showing the lack of efficacy of out-of-hospital advanced airway use [6,7], vasopressin [8], IV drugs [9] and even ALS care as a whole [10]." He also describes the study as ".......a large, robust study, corroborated by others, and whose authors worked hard to control for every variable they could". Doesn't sound like he agrees with shocksalot that this paper is "crap".

Again, unfortunately this research is being spoken about as if it implies causality - by the lead author herself, posted on YouTube, with the Harvard emblem emblazoned on it.


And also, because no specific intervention or practice was investigated, it is impossible to use this study to change practice. Studies like this are foundational in that they cast a broad net and generate questions that can hopefully be researched in a more controlled and focused way. The fact that such a basic thing needs to be pointed out continuously to the prehospital world is a bit worrisome.

If this study were more closely tailored to a specific clinical scenario or intervention, it could indeed be question generating. An example of such a paper would be a retrospective examination of the influence of various factors on the survival of prehospital trauma (or whatever) victims. One way to use such a study in a question-generating way is to examine a multitude of factors and identify associations that deserve further investigation - e.g., if ALS care, long extrication/transport time, and field intubation are all associated with bad outcomes, we now have a new impetus to use other methodologies to determine if there is a causal link between any of those three variables and poor outcome.

This paper used a retrospective, purely statistical method to try to answer a question, not to search for new questions. I think that is the center of many people's criticism, and it seemed to be the center of the criticism of this group's previous paper, which used similar methodology. "Question-generating" studies aren't supposed to simply be a less-robust examination of an interesting question. As far as I can see, the only reasonable follow-up to this paper is a more robust study asking the exact same question - "does ALS care lead to bad outcomes?"
 
I also do not have access to the full text of the article ... This study is crap

The only thing that is "crap" is your ability to form a valid opinion without reading the study.

This study used billing codes ... the largest data pool on the planet is worthless if the data does not adequately represent the population being studied.
How do you figure the data is nonrepresentative? BECAUSE OF THE "BILLING CODES"?

Paramedics have zero training in coding, which makes it a convenient thing to harp on and feel good about. The people who do understand coding think you look foolish for making that argument.

They used ICD-9 codes to group patients by malady. The ICD-9 code assigned by the treating hospital depends on the medical diagnosis.

Then they grouped patients into BLS and two tiers of ALS care based on billing, and those tiers reflect interventions performed, including the presumption of the need for ALS care or assessment. Perhaps you think that billing departments won't bill the government for explicitly allowable reimbursements?

Injury Severity Scores were associated with each patient in the trauma group because the trauma ICD-9 codes encompass a much wider range of patient acuity than, say, respiratory failure.
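For anyone unfamiliar with how claims-style data gets used this way, here is a minimal sketch. The record layout and example ICD-9 codes are my own illustration, not the actual Medicare dataset; the point is only that outcomes get compared within diagnosis groups and billed service levels, not across the raw pool:

```python
# Minimal sketch with made-up records -- not the actual Medicare claims data.
# Group by hospital-assigned ICD-9 diagnosis and billed service level, then
# compare mortality within each diagnosis group.
from collections import defaultdict

claims = [
    # (icd9_dx, service_level, died) -- illustrative records only
    ("410.91", "ALS2", True),    # AMI, unspecified
    ("410.91", "BLS",  False),
    ("518.81", "ALS1", False),   # acute respiratory failure
    ("518.81", "BLS",  True),
]

groups = defaultdict(lambda: defaultdict(list))
for dx, level, died in claims:
    groups[dx][level].append(died)

for dx, levels in sorted(groups.items()):
    for level, outcomes in sorted(levels.items()):
        rate = sum(outcomes) / len(outcomes)
        print(f"{dx} {level}: n={len(outcomes)}, mortality={rate:.2f}")
```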

Knowing how the vast majority of EMS systems work by sending ALS to the sickest or every patient what is different about the BLS population?
The patient types addressed in this study:
AMI, RESPIRATORY DISTRESS, TRAUMA, STROKE
Every system would send ALS if they had it, except possibly for minor trauma, and those cases were controlled for by the Injury Severity Score.


So if the patient got BLS, it is because there was no ALS. Not because of low severity. And this was verified by their data analysis.

Were they less sick at the time of dispatch? Was this adjusted for?
The study found that BLS patients were typically older and had more comorbidities.

Common sense
Common sense used to tell us to give hemorrhagic shock patients infinite NS boluses to keep a pressure, even as their blood turned translucent, because patients need a pressure to correct hypoperfusion. Now common sense tells us we need to balance correcting hypotension against exacerbating blood loss with too much IVF.

Other, more rigorous, studies tell us that ALS helps in some situations and hurts in others
That is what this study shows. What studies are you referring to?

Beyond that there is not much other data
Wait... you just said there were all these other studies?

Paramedic and overall EMS quality varies by zip code and data collection and reporting by EMS is virtually absent. In an article that responded to this study an author cited a mere 14% compliance rate of ALS crews administering epinephrine every 3-5 minutes in cardiac arrest. If the quality of care from paramedics is poor then we need to examine that, a study comparing ALS and BLS bills and outcomes is not an adequate exploration of the topic.
Here we agree on the first part of your statement. On the second part, you missed the point of this study, which is to IDENTIFY A PROBLEM. The study did not supply a solution (nobody said to eliminate EMS). There was some speculation on cause by looking at other studies, such as the one you reference.

Did the study find a problem? YES

Did the study find a solution? NO

What should we do? DETERMINE THE CAUSES WITH FURTHER STUDY SO THEY CAN BE CORRECTED

We, as an EMS community, should not be settling for crap studies with insufficient data.
Your reasoning reduces to the claim that it is literally impossible to study whether an ALS care system benefits patients.

We shouldn't settle for pundits who don't understand research processes or even read the studies they want to discount for emotional reasons.
 
Here we agree on the first part of your statement. On the second part, you missed the point of this study, which is to IDENTIFY A PROBLEM. The study did not supply a solution (nobody said to eliminate EMS). There was some speculation on cause by looking at other studies, such as the one you reference.

Did the study find a problem? YES

What problem? This is the crux of the issue; saying they found a "problem" requires accepting that ALS care caused worse outcomes; and that requires accepting that the methodology used here says anything about causality.

If all the paper did was identify an association between ALS care and poor outcomes, well then OK, but that's not a problem that requires fixing, just like the fact that patients treated in the ED have worse outcomes than patients treated at an urgent care isn't a problem.
 
What problem? This is the crux of the issue; saying they found a "problem" requires accepting that ALS care caused worse outcomes; and that requires accepting that the methodology used here says anything about causality.

The study didn't conclude: "eliminate ALS."

A proposed causal link doesn't imply the specific causes. As another poster said, ALS isn't a monolithic thing. It is a conglomerate of interventions, providers, care philosophy, and systems. The study says there is something (or several things) wrong with it. Both the study's authors AND THE DETRACTORS have given their opinions or provided studies indicating what some of those specific causes (and potential solutions) may be, whether provider quality, training, or methodology of care and transport!
 
The study didn't conclude: "eliminate ALS."

A proposed causal link doesn't imply the specific causes. As another poster said, ALS isn't a monolithic thing. It is a conglomerate of interventions, providers, care philosophy, and systems. The study says there is something (or several things) wrong with it. Both the study's authors AND THE DETRACTORS have given their opinions or provided studies indicating what some of those specific causes (and potential solutions) may be, whether provider quality, training, or methodology of care and transport!

But that assumes that a nonrandomized retrospective trial with data derived from billing codes is sufficient to conclude that there is a causal link between ALS care and poor outcomes in the first place. I think you will find that conventional wisdom, especially among clinicians, is that such a methodology cannot completely control for confounders and thus cannot conclude anything about causality.
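To make "cannot completely control for confounders" concrete, here is a toy example with invented numbers (nothing to do with the actual paper's analysis): even after stratifying on a measured comorbidity count, an unmeasured "scene severity" that drives both ALS dispatch and death still makes ALS look worse within every stratum, despite zero true treatment effect in the model:

```python
# Toy residual-confounding demo -- invented model, not the paper's analysis.
# Adjusting for the measured comorbidity count does not remove the bias from
# the unmeasured severity variable that drives both ALS dispatch and death.
import random
from collections import defaultdict

random.seed(2)
strata = defaultdict(lambda: {"ALS": [], "BLS": []})

for _ in range(200_000):
    comorbidities = random.randint(0, 3)      # measured; available to "adjust" for
    scene_severity = random.random()          # unmeasured confounder
    got_als = random.random() < 0.2 + 0.7 * scene_severity
    died = random.random() < 0.02 + 0.03 * comorbidities + 0.30 * scene_severity
    strata[comorbidities]["ALS" if got_als else "BLS"].append(died)

for c in sorted(strata):
    als, bls = strata[c]["ALS"], strata[c]["BLS"]
    print(f"{c} comorbidities: ALS {sum(als)/len(als):.3f} vs BLS {sum(bls)/len(bls):.3f}")
```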
 
At the very least, it confirms issues we already knew existed and gives more...evidence, shall we say, to back the argument for more controlled trials.
It's an elegant study and confirms OPALS et al, so what's the problem?
 
But that assumes that a nonrandomized retrospective trial with data derived from DIAGNOSIS CODES AND billing codes AND MUCH MUCH MUCH MORE FROM A HUGE DATASET is sufficient to conclude that there is a causal link between ALS care and poorer outcomes than BLS for some patient types that ALS should have better outcomes for if the patients were comparable... AND THEY WERE

Fixed a few things... and those fixes are NOT the same as saying Urgent Care patients do better than ED patients as a generalized statement.

I think you will find that conventional wisdom, especially among clinicians, is that such a methodology cannot completely control for confounders and thus cannot conclude anything about causality.
You'll find that they did an excellent job of controlling and discussed what they couldn't, and the preponderance of likely biases were in favor of ALS, not BLS.

So, since you already admit that RCT is not acceptable here, you'll find that medical scientists and clinicians are willing to accept the suggestion of causality to the extent of looking for individual causal explanations for the system effect on outcomes found in this study.

Otherwise, you are simply saying, "I think that ALS should be awesome, and there is no way to test this assumption, and anything that says otherwise is inconclusive at best." There is no complimentary way to describe such thinking.
 
Fixed a few things... and those fixes are NOT the same as saying Urgent Care patients do better than ED patients as a generalized statement.


You'll find that they did an excellent job of controlling and discussed what they couldn't, and the preponderance of likely biases were in favor of ALS, not BLS.

So, since you already admit that RCT is not acceptable here, you'll find that medical scientists and clinicians are willing to accept the suggestion of causality to the extent of looking for individual causal explanations for the system effect on outcomes found in this study.

Otherwise, you are simply saying, "I think that ALS should be awesome, and there is no way to test this assumption, and anything that says otherwise is inconclusive at best." There is no complimentary way to describe such thinking.

Do you speak to people like this in real life? I don't think I've been anything but polite, and I expect the same from you, although you are an anonymous set of fingers somewhere on the internet.

Enjoy your discussion.
 
What problem? This is the crux of the issue; saying they found a "problem" requires accepting that ALS care caused worse outcomes; and that requires accepting that the methodology used here says anything about causality.

According to all conventional wisdom, ALS should show clearly improved outcomes among the sicker patients.

Considering that the very existence of the paramedic profession as we know it and much of the EMS industry relies on that assumption, I'd say that a large, well put together study showing that such a benefit may not exist - or worse - presents a pretty substantial problem, not to mention a host of potential research questions.

You sound as though you read quite a bit of research. If that is the case, then you are well aware that probably a large majority of published clinical research results in findings that are not immediately actionable but that contribute to the body of scientific knowledge, often forming the basis for further study.
 
Do you speak to people like this in real life? I don't think I've been anything but polite, and I expect the same from you, although you are an anonymous set of fingers somewhere on the internet.

Enjoy your discussion.
Hey, no offense was meant, but I definitely was refuting your position by pointing out your incomplete representation of the study and re-presenting your logic in a way that illustrates its faults, which is part of a spirited debate.
 
This is a hostile work environment.
 
This is a hostile work environment.

I love you, man.

 
The study didn't conclude: "eliminate ALS."

A proposed causal link doesn't imply the specific causes. As another poster said, ALS isn't a monolithic thing. It is a conglomerate of interventions, providers, care philosophy, and systems. The study says there is something (or several things) wrong with it. Both the study's authors AND THE DETRACTORS have given their opinions or provided studies indicating what some of those specific causes (and potential solutions) may be, whether provider quality, training, or methodology of care and transport!

“This study demonstrates that in medicine costlier isn’t always better; simply transporting the patient to the hospital as soon as possible appears to have a high payoff", states Newman, one of the study's authors.

This is a very bold conclusion to draw from the study that was used. I think this is the issue that we have with the study. The methodology limits the study's ability to control for confounders, regardless of the use of other billing codes to attempt that adjustment (with the exception of the trauma sample, which is likely well controlled by ISS). The only logical conclusion this study can reach is the need for more studies.

The lack of mandatory data reporting in EMS is really the biggest problem and makes it almost impossible to drive EMS care forward. Most research in EMS is limited and/or biased. A lot of it only applies to specific agencies or areas, or is biased by voluntary data reporting and study participation (which biases toward higher-quality care and the Hawthorne effect). Things have to change so we can use large data sets to really see where EMS is across the country, how ALS care affects outcomes, and how it differs between agencies and geographic regions.
 