It's funny...I just got an e-mail that there's going to be a 40-minute round-table with the author at this year's ACEP conference next Monday. I'm sorry I can't go....
I've never suggested - or even seen anyone else suggest - that pure statistical methods can show causality.
On a careful re-read, the paper does indeed dance around implying causality. The author very heavily suggests she is describing a causal relationship in her presentation posted on YouTube, however....she even goes so far as to imply that her primary question is "how does ambulance type affect survival."
She also says
"if all of these patients instead got basic life support than I estimate that an additional 15 people would live to at least 90 days."
"This is a big problem, because currently if we call 911 we get the advanced ambulance."
"so how is it that advanced is worse than basic ambulance."
"Using data and statistics we can study causality in real world settings that are otherwise difficult to replicate in experiments"
Very heavy on implying that ALS is the cause of the bad outcomes....
There is no way to do an RCT on a sample of tens of thousands of patients on BLS vs ALS. So retrospective studies like this are done to identify new research questions. Again, they are not intended to show causality or to change practice.
Fair, if this were a question-generating piece of research. I'm sorry, but I really see the author's presentation of the work as implying causality, and as implying that we need to start hacking away at ALS-level care. The author's responses to critical letters after their first paper suggest the same.
The more I think about this piece of research, the more I dislike it. There is no good research question that comes out of this. It isn't an attempt to establish equipoise to justify an RCT of ALS vs BLS. It has too many topics to really evaluate any patient population or intervention, so we end up with a poorly elucidated skim job.
My other problem is that I get heavy overtones of financial savings in the papers and her presentations (and her CV), and the quality of the science here is nowhere near good enough to justify money-saving (aka resource-slashing) decisions.
I get a bit too much of a feeling that the author is happy she found the results that she did. It seems likely to me that a career would get more mileage out of demonstrating to Medicare how it can save money, so I really start to wonder about the author's underlying biases. I suspect there is more incentive to generate research that finds expensive interventions not useful, which makes me a bit more skeptical from the get-go.