Archive for the 'critical appraisal' Category

Diuretic Strategies in Patients with Acute Decompensated Heart Failure N Engl J Med 2011;364:797-805.

This paper should be kind of a big deal, I think.

We treat a lot of heart failure, and we don’t really know how to treat it. We give lots of drugs to make you pee, to “dry out” the lungs, but we have no real evidence that it works or does anything.

These guys tried to answer a bit of the question.

Allowing that we’ll give them diuretics even without evidence, what about the dose (high vs low), or the means by which we give the drug (bolus vs infusion)?

They enrolled sickish CHF patients but excluded the real ICU cases. They enrolled them once they were in the hospital, not from the ED.

Low dose was their daily, chronic dose but in IV form, and high dose was 2.5 times their normal daily dose.

The fascinating thing was that they found diddly-squat of a difference no matter what treatment they got. Not only was there no difference in their primary outcome (a basic “do you feel better?” at 48 hours), but there wasn’t even any difference in harm! If we gave them 2.5 times the dose, the kidneys did pretty much the same.

When I rooted around in the supplementary appendix for the mortality rate (and why wasn’t it in the paper?), it was roughly 15% at 60 days for all 4 groups.

This should really raise the question – if it doesn’t make a difference how, or how much, we give of it, should we be doing it at all?

Of course, we empiricists have been asking this for a while; we just don’t have the answer yet.

Over my 6 years working, I’ve changed a lot in my use of diuretics, from low dose, to mega-dose, to almost homeopathic dose these days. It’s hard to admit a patient with CHF without giving some; people look at you funny.


Urine Test Strips to Exclude Cerebral Spinal Fluid Blood West J Emerg Med. 2011;12(1):63-66

When I did my elective in South Africa we used to diagnose TB in the casualty dept. by sticking a cannula in the pleural effusion, sucking out some fluid, diluting it 1:9 with tap water and then dropping some on a urine dipstick. If you had 3+ of protein you had TB.

These guys tried a somewhat similar idea with CSF samples, looking for blood.

Absolutely cracking idea for a study.

A few problems (which they acknowledge):

– they looked at whatever CSF came into the lab that wasn’t grossly xanthochromic. Lots of these weren’t for SAH, and indeed they don’t tell us how many were +ve for SAH.

– it’s not clear if the time delay to testing is relevant. They tell us testing happened within a week, but not whether the delay mattered.

They found moderate (about 90%) sensitivity and specificity in the 50s. Not good enough for SAH in my opinion. The gold standard was spectrophotometry here, which is better than a guy in a lab holding the tube up against a sheet of white paper and deciding if it looks a bit yellow or not.
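
To put some rough numbers on “not good enough”: a negative dipstick barely moves the needle. A back-of-envelope sketch (the sensitivity and specificity are taken roughly from the paper; the 10% pretest probability is entirely invented for illustration):

```python
# Back-of-envelope: why ~90% sensitivity isn't enough to rule out SAH.
sens, spec = 0.90, 0.55            # roughly the paper's figures
neg_lr = (1 - sens) / spec         # negative likelihood ratio, ~0.18

pretest = 0.10                     # pretest probability of SAH - made up for illustration
pre_odds = pretest / (1 - pretest)
post_odds = pre_odds * neg_lr
post_prob = post_odds / (1 + post_odds)
print(f"Probability of SAH after a negative dipstick: {post_prob:.1%}")  # ~2%
```

A residual risk of around 2% is nowhere near the miss rate anyone would accept for SAH.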

Worth some follow-up study.

PS the Western Journal of Emergency Medicine has some nice stuff but doesn’t appear to be on PubMed?

Risk score to stratify children with suspected serious bacterial infection: observational cohort study Arch Dis Child 2011;96:361–367

What these guys looked at is a real challenge. How do I tell if the kid in front of me has just “the snuffles” or is in the early hours of something terrifying like a pneumococcal sepsis?

They do what everyone does these days and try to come up with a prediction “rule” that you can type into your iPhone to tell you what to do with your patient.

This could be a poster child for a badly done derivation set. Or let me take that back: the derivation was well done, the variables they chose to look at were silly.

They used terms like SBI = α + β₁X₁ + β₂X₂ + β₃X₃, so they must know what they’re doing, right… right?
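
For the uninitiated, that equation is just a logistic regression. Here’s a minimal sketch of how these derivation exercises work mechanically – everything below (the data, the variable names) is simulated and invented purely for illustration:

```python
# Sketch of a derivation study: regress the outcome on candidate predictors,
# then keep the "significant" coefficients as your rule. All data simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000                                # roughly the study's size
X = np.column_stack([
    rng.normal(30, 10, n),              # x1: e.g. respiratory rate
    rng.normal(38.0, 1.0, n),           # x2: e.g. temperature
    rng.integers(0, 2, n),              # x3: e.g. some binary clinical sign
])
y = rng.binomial(1, 0.04, n)            # ~4% event rate, as in the study

# logit(P(SBI)) = alpha + b1*x1 + b2*x2 + b3*x3
model = LogisticRegression().fit(X, y)
print(model.intercept_, model.coef_)
```

The regression itself is routine; the whole game is won or lost in how you define the outcome and choose the candidate variables.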

Serious Bacterial Illness (SBI) was defined kind of weirdly. They suggested an SBI was a hospital admission PLUS one of the following. Before getting to the following…

How can a hospital admission be necessary in a definition of SBI? The kids that got admitted got admitted because someone thought they were sick enough – the reasons why (which are likely many) are not recorded, and as a result it becomes useless in trying to “derive” a rule.

Anyhow.

So you had to be admitted PLUS some of the usual sensible things like pneumonia or pus or something like that, but they also included CRP>120 or WCC>20. So if you got admitted to hospital with a CRP>120 you apparently had an SBI. This is quite frankly nonsense. How can you have a definition of Serious Bacterial Illness that needs no reference to bacteria?! You could be counted as SBI in this study if you had Still’s disease…

Sorry for getting all high-pitched and exasperated here, but this stuff is really important. No matter what you do with your logistic regression after this, you’re not gonna be able to answer the question you started with.

They recruited 2000 kids to this study. Wow, that’s lots of kids – surely they’ll find lots of cool things?

Unfortunately not.

Only 74 (or 3.8%) of the kids had SBI (by their definition), and remember that their definition will tend to overestimate the SBI.

With an event rate this low it’s hard to say anything meaningful in terms of useful identifying features. Not that that stops them doing just that.

In terms of the 74 SBI kids, most had pneumonia – that’s pretty much expected. What is a little bit odd is the low rate of UTIs. In lots of these kiddy sepsis studies UTI is way up the list and makes up lots of their numbers (and remember, even a sick kid with a UTI isn’t the same as pneumonia or meningococcal sepsis), yet here (with a generous definition) it wasn’t.

As mentioned above, they calculate sens/spec and AUC and all kinds of numbers that are “accurate” but not in the slightest bit useful.

One bit is worth a quote

Apart from tachypnoea (sensitivity 71.6%), the sensitivity of most clinical signs was poor.

Does this mean they think a sensitivity of 71.6% is actually good?

They come up with a “rule” and I’ll spare you the details, but guess what? A sick kid looks just like you’d expect a sick kid to look.

We could do with putting our energy into teaching ourselves how to spot a sick kid with the much derided “clinical judgement”.

This isn’t much use to us.

PS by their numbers and definitions, if I just sent home all 2000 of those kids without doing a thing, I would have been right 96.2% of the time. Worth noting.
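
A quick sanity check of that arithmetic, using the headline numbers (the exact denominator after exclusions shifts the decimals a touch):

```python
# Accuracy of "discharge everyone, do nothing" is just 1 - prevalence
# when the disease is rare.
n_children = 2000   # recruited
n_sbi = 74          # met the study's (generous) SBI definition
accuracy = 1 - n_sbi / n_children
print(f"Send them all home and you're right {accuracy:.1%} of the time")  # ~96.3%
```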

Prognostic Value of the Duke Treadmill Score for Emergency Department Patients with Chest Pain The Journal of Emergency Medicine, Vol. 39, No. 2, pp. 135–143, 2010

This paper deserves a rant, just for the sake of its ridiculous use of numbers.

Most people who come to an Emergency Dept. with chest pain do absolutely fine in the long run.

A small number will be having/have had a heart attack. We can usually pick these up pretty well.

Some people have chest pain but no heart attack, yet go on to have a big heart attack over the next few months. These are the tricky ones (and unfortunately there’s a lot of them). They look well, their tests tell us they haven’t had a heart attack, but the question is: are they at big risk of having one in the next few months?

We have no good test for this. No matter what people might say, we don’t.

Our gold-standard test has become the angiogram, where we use dye and x-rays to look at the lining of the arteries to see if they’re narrowed. While useful, it still doesn’t tell us if someone is going to have a heart attack in 2 months.

So in this slightly grey area we have to work out what’s best to do.

There is big, big money in this for someone who can work it out. And we’re already throwing big money at it.

One of the tests that has been around for a while now is the exercise stress test (EST) where we get people to run on a treadmill while we take an ECG to see if we can induce angina. Hardly the most hi-tech but hey…

It certainly is +ve more often if the person is going to have a heart attack in the next 30 days, but it’s not good enough for us to make a decision on. If all the test gives us is enough info to guess, then maybe we’re better off guessing without the test – in other words, clinical judgement.

This paper took 170 of the kind of patients we’re interested in: in the ED with chest pain, an ECG that doesn’t make a decision for us, and a troponin that tells us they haven’t had a heart attack.

They all got an EST and they used the Duke scoring system to stratify them low, medium and high risk.

They followed them (not in a creepy way) for 30 days to see if they had an adverse event.

And this is where it gets a bit dubious. I care about whether the patient dies or has a heart attack in the next 30 days. And they measured that, but they also measured whether people got an angiogram, and 1) that’s not really an adverse event in the same sense, and 2) it’s a bit subjective; someone has to decide to do the angio – it’s not like it just happens spontaneously as part of the natural history of the disease.

So this skews all their figures. They found a 3.5% adverse event rate and guess what – it was largely made up of angios. Only 2 people had an MI in the next 30 days.

Especially seeing as most of the angios occurred while the patient was in hospital, not when they were rushed back in a week later.

With such a low adverse event rate it makes a farce of going on to calculate sensitivity and specificity, which they do anyhow.

Even more farcical is the dreaded -ve predictive value. Very basically, this is the percentage chance, after a negative test, that nothing bad will happen to the patient.

They calculate it as 99.2%.

Which is nonsense. In their cohort, if you simply sent them all home without the EST, the percentage chance of them not having a heart attack in the next 30 days would have been 98.8%.

Beware the -ve predictive value.
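
The arithmetic makes the point, using the paper’s own figures (170 patients, 2 MIs within 30 days):

```python
# A high NPV can be meaningless at low prevalence: compare the test-free baseline.
n_patients = 170
n_mi = 2

# "Send everyone home without the EST" is right whenever no MI occurs:
baseline = (n_patients - n_mi) / n_patients
print(f"No test at all: {baseline:.1%}")    # ~98.8%
# ...so the quoted NPV of 99.2% adds almost nothing over doing nothing.
```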

They conclude wonderful things about their results and suggest that the EST is useful.

Did I mention that it was sponsored by a medical diagnostics company…

 

Compression Ultrasonography of the Lower Extremity With Portable Vascular Ultrasonography Can Accurately Detect Deep Venous Thrombosis in the Emergency Department Ann Emerg Med. 2010;56:601-610

[I’m still reading the EM literature fairly avidly, I’m just not posting quite as much as I did]

I’m a bit of an ED USS fan. For the simple things. Keep it simple and we’ll not cock it up. I promise.

Some in radiology are (understandably) concerned about letting us play around with the higher frequencies. I agree, lots of these concerns are genuine, but some are just nervous titters over who owns which turf.

Every other specialty in the hospital seems to get free (mainly unaccredited) rein with the USS machine. In EM there’s a lot of work going into this, so it’s kind of an inevitability that we’ll be waving a USS probe at you in the near future.

Anyhow

DVTs are dull and largely uninteresting. But some of them are probably important. Not nearly as many as we think, mind you, but definitely some of them. There is a movement towards abandoning imaging anything below the knee for DVT. If you look you’ll find them; it just seems that they don’t mean anything.

The radiology dept. in our place seem to be mainly doing this.

This study sought to prove that we can do this (very simple, come on, admit it…) imaging study as well as the radiology techs can, with very little training.

This place had 60000 patients a year and 60 docs (of note, we see 78000 a year and have about 25!), of whom 45 did the enrolling. Most were middle-grades with some (but not much) DVT USS experience.

They enrolled everyone for whom there was enough concern to order a USS in the radiology dept.

They did a 2-point compression exam (femoral and popliteal only) and compared this with the radiology exam (proximal limb only but not a 2-point compression).

And they got the same results (about 25% were positive overall) as the radiology dept. did.

[They actually did better as the ED called one scan +ve that the radiology called -ve yet it turned +ve when they repeated it a week later!]

I think this is pretty compelling stuff. If we can finally grow the balls to stop worrying about below knee DVTs (unless there’s some other reason to worry about them) and get some basic, universal USS training and culture going in the ED (which the college are pushing for pretty well, even if Northern Ireland isn’t quite on the ball yet) then we can make this whole thing a lot less hassle for both us and our patients.

Effect of delayed lumbar puncture on the diagnosis of acute bacterial meningitis in adults EMJ 2010:27:433-438

The basic overview for everyone

Meningitis is a bad thing. The bacterial one at least. Around the world it kills kiddies in droves. It’s a big deal. Thanks to vaccines and antibiotics it’s not so much of a big deal (as in, not so common) where I live. When it happens it’s a terrifying disease; it just doesn’t happen that often. Even more so since the pneumococcal vaccine.

The classic (headache, photophobia, neck stiffness) presentation is no longer considered classic because we see it so rarely that we now only see the really hard cases – the kid with the fever and a sore throat who’s dead by morning. In the third world you’ll see the classic presentation all the time.

The test we do to make the diagnosis is the lumbar puncture – the one they do on House in every episode. And as tests go it’s not bad.

Very rarely, and mainly more than 40 years ago, you would put the needle in to take some fluid and the patient’s brain would squeeze out of his skull from the pressure change. This is, as one would imagine, a bad thing.

We are all terrified of this happening to us when we do the test (like most things in medicine, we are more scared of doing harm than we are keen to do the right thing), so we often get a scan of the patient’s brain first to see if there’s anything obvious, like blood or an abscess, that might make the brain squirt out.

Rarely does this scan do anything other than make us feel better. It is rarely helpful.

As a result we delay doing the useful test so we can do a less useful one.

Most of the time these days we have enough common sense to start treating the patient before we do any tests. Bacterial meningitis is one of the few diseases where treating it early makes a genuine difference.

Now I think these guys’ conclusions are mainly right, but I’m not sure the study they’ve published gives them much evidence to say it.

The more complex critical appraisal bit

They tried to look at people who had meningitis and see how long they waited for a lumbar puncture and why they waited and what impact this had on how they did. But there are lots of problems.

– it’s a chart review – they looked at notes and decided what was wrong with the patient from there. Which can be useful, but often you can read the chart whichever way you wish to prove your point. What do you do with missing data? What if someone made a decision on data that wasn’t written down? If you don’t tell us how you decided what the chart said, then everything that follows is dubious.

– the patients they chose to look at are those pulled by discharge coding – so you only get in the study if someone thought you had meningitis and wrote it down. This misses those who had meningitis but were never diagnosed because someone thought they had something else. The famous paper on how to do a chart review is here.

– they excluded people who didn’t get a lumbar puncture – this was 10% of their patients. This is a big problem, as there was probably a reason why they didn’t get an LP, so you can’t make statements about all meningitis patients, only the ones who got an LP. Though one would question the validity of a diagnosis of meningitis that doesn’t involve an LP.

– they questioned and reviewed the diagnosis of meningitis – they may well be right that some of the people who got coded as meningitis didn’t have it, but you can’t do this in a chart review with their methods.

– the gold standard seemed to be the British Infection Society guidelines – as with most guidelines these are often evidence light (there’s usually not much evidence in existence). I am aware of no evidence that shows that following the guidelines saves lives even though I agree with the guidelines in most respects.

– they do statistical analysis on symptoms from small numbers of patients in a chart review – this bit is completely pointless. When I think someone has meningitis, I ask whether they vomited before coming in, but that doesn’t mean I write it down, so you can’t find it out retrospectively.

– they try to make firm statements about whether or not a patient should have had a CT based on what was written on the chart, when in reality there are lots of reasons, many logistical, why this happens or not.

– they note that none of the patients (whom they selectively chose) had an LP prior to antibiotics, even though “antibiotics are immediately started after the LP is performed, or sooner if there is a delay of more than 30 mins”. This is kind of weird, as the reason they got the antibiotics before the LP may have been that the docs anticipated a >30 min delay (for whatever reason), in which case they were following the guidelines.

We do lots of needless CT scans – the most useful point I found was that of the 62 patients (two thirds of the total) who got scanned, none had anything to worry about on CT.

I agree with these guys that we miss valuable information by sending people for silly scans – it takes too long to move the patient, get a report and make a decision (not the scan itself, which takes seconds). I agree that the 4 hour target has caused problems here [though in recent news we appear to be scrapping it]. I agree that we’re all too scared to do the right thing. Unfortunately this isn’t great proof of that.

Diagnostic and prognostic utility of troponin estimation in patients presenting with syncope: a prospective cohort study Emerg Med J 2010 27: 272-276

The basic overview for everyone

About 2pm every Sunday we have a wee run of ambulances of people who have fainted/passed out/fallen over in church. Religion is bad for you. We can all agree on that.

99% (warning – made-up number alert) of these are what we call syncope, or if we try to be more technical and make it sound like a real disease we say neurocardiogenic syncope or vasovagal. This is largely to make doctors and patients feel better: “I thought it was a faint, but I went to the hospital and they said it was neurocardiogenic syncope…”

We like to make light of syncope (if you’ll pardon the pun) but in reality it can be tricky. For the vast majority of people who fall over, collapse, or have something like a faint, that is exactly what it will be – a faint, a funny turn, and of no further consequence. Unfortunately some won’t be. Some will have horrible things happen to them.

As you can imagine most of emergency medicine is like this.

We go to medical school and learn nothing and graduate as doctors and screw up loads and eventually learn something and after a while we get pretty damn good at working out which are just faints and which mean lots of badness.

Some call this clinical gestalt. I figure that this is what we’re paid for.

Most of the time our gestalt is pretty good. Really well people look well, really sick people look sick, fire is hot, ice is cold etc…

The inbetweeners where you’re not sure are the tricky ones. Sometimes we use tests to help us. If I suspect you have a broken wrist I can do an x-ray and then I have a useful answer to my question.

Unfortunately when it comes to syncope we have no such tool or test. Mainly we have gestalt – we talk to them, hear their story and we make an educated guess – sorry, clinical judgement.

Understandably, when we make our guess (ahem…) we err on the side of caution; often this means admitting the patient. When it turns out that the patient is fine, you could say it was a waste of both hospital and patient time.

The more complex critical appraisal bit

This study from Edinburgh is part of yet another “rule” to help us leave our brains at the door – sorry, work out who is safe – in syncope. In particular it wanted to examine the utility of troponins in making a decision on people with syncope.

I like its conclusion – “estimation of troponin I provides little additional benefit to the presenting ECG in identifying patients with syncope due to AMI” – however I’m not so keen on how they got there.

They approached patients who presented with syncope to assess eligibility; a quarter were deemed ineligible (by reasonable standards) and two thirds of the eligible patients got enrolled.

My biggest problem comes here: for a study that wants to assess troponins, they only managed to do a trop on half of the patients enrolled. I can’t work out if people got troponins at the physician’s discretion (in which case it is a selected group of presumably higher risk syncope patients) or if the troponin was part of the protocol (in which case they did a really bad job of following the protocol).

This kind of invalidates a lot of the conclusions, if I’ve read it right. How can you make statements about a general syncope population and troponins when only half of the patients got the test? There may have been reasons why these patients didn’t get the test done, which makes them different.

Anyhow.

Of all who had the trop done there were 4 MIs – all of which had ECG changes. Which is reassuring. If they have a normal ECG then you really shouldn’t have to worry about an MI.
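
Mind you, 4 events is a tiny number to hang a rule-out on. A quick sketch of the uncertainty around that “100% sensitivity” (a standard exact binomial interval, not something the paper reports):

```python
# Observed: 4/4 MIs had ECG changes, i.e. ECG sensitivity "100%".
# With x = n, the exact (Clopper-Pearson) 95% CI lower bound is (alpha/2)**(1/n).
n, alpha = 4, 0.05
lower = (alpha / 2) ** (1 / n)
print(f"95% CI for sensitivity: {lower:.1%} to 100%")  # ~39.8% to 100%
```

So the data are consistent with the ECG missing more than half of MIs – reassuring-ish, not proof.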

They also found that of the 256 analysed patients, 9% had a bad outcome (their definition of this was slightly more dubious).

And as in lots of other studies, lots of the raised troponins were for non-MI reasons. Hopefully by now we’re learning this – don’t automatically think MI when you see a raised troponin.

Given the constant battle that we find in the NHS to get a patient admitted, the trop is sometimes a useful stick to beat the admitting team with – which is truly terrible practice, I know, but sometimes you just have to do what you need to get the patient admitted – despite its lack of utility.

I send the vast majority of my syncope patients home after an ECG and history and examination (you know, that thing you do where you touch the patient and use the stethoscope thingy – it’s largely useless but it looks good…). In general when there’s something bad happening it’s fairly obvious from the start.

My problem with decision instruments for things like syncope is that it’s too complex a problem to simplify into 4 clinical variables. For ankles this works; for syncope I’m not sure.

