Archives for April 2011

Reading Research: Diagnostics

Why do you do what you do when you do it?

Initially, we do it because we’re taught to. It comes down to us from on high, from instructors or textbooks — when you see x, do y — and we’re simply expected to learn it, memorize it, and recite it back. Then, once in the field, to follow it mechanically.

But before long, if we’re to become more than just medical Roombas, we really have to start asking Why. It’s not because we’re difficult children, or to satiate our curiosity. It’s because even at the best of times, the rules can’t address every situation. And in order to make intelligent, appropriate decisions when the circumstances aren’t clear and simple, we need to understand the underlying principles behind the rules we learn. We need to understand both the potential value and potential harm of the interventions we provide. We need to understand the meaning and importance of specific assessment findings. We need to be students of reality, and the human body, rather than of arbitrary rules.

In order to do all this, we need to be able to read research. Medical research is where these answers come from; it’s where we learn what works, and how well, and what importance to attach to the things we see. To read research, though, we need to understand the basic statistical methods researchers use.

Statistics is a big, big topic, and I don’t have a strong background in it, so if you really want to dive into this, take a class. The analytical and regression methods used to crunch the data in a study are something we won’t touch here. But we do need to understand a few basic terms, because they’re central to how the results of a study are presented — in other words, if you’re looking for answers, this is the language in which they’re written. So although the idea of a post about statistics may sound as appealing as a brochure on anal ointment, bear with me; this won’t be too painful, and it’s information you can use over and over and take to your grave. Right now, let’s talk about the numbers used to describe the accuracy of diagnostic signs.

 

Sensitivity and Specificity

Take a certain test. It could be anything. A clinical finding. A laboratory test. Even a suggestive element from a patient history. Call it Test X.

Let’s say that this test is linked to a certain patient condition, Condition Y. Something bad. Something we want to find. In fact, Condition Y is the whole reason we’re looking at Test X.

What would make Test X a good test for Condition Y? Well, when the test says “You have Condition Y!”, then you should really have it. And if it says “You don’t have Condition Y!”, then indeed, you shouldn’t have it. It doesn’t have to be perfect. But it should be pretty good — otherwise, what’s the point in using the test? If it doesn’t tell us something we didn’t know before, we might as well ignore it.

When the test says “You have Condition Y,” and you really do have it, we’ll call that a true positive. When the test says, “You don’t have Condition Y,” and indeed you don’t have it, we’ll call that a true negative. Those are the findings we want; we want the test to tell us the truth, so we can base our treatments and decisions on reality.

On the other hand, when the test says, “You have Condition Y,” but you DON’T have it — in other words, an error, the test got it wrong — we call that a false positive. We thought you were positive, but whoops, you’re actually fine. And when the test says, “You don’t have Condition Y,” but it turns out that you do, we’ll call that a false negative, or a miss. The test cleared you, but it missed the badness; you actually do have the condition. These are the screw-ups.

How many true positives and true negatives does our test yield, versus how many false positives and false negatives? This determines how good our test is, how faithful to reality. The perfect test would have 100% true results, either positive or negative depending on the patient’s condition: if you have Condition Y, the test is positive, and if you don’t have Condition Y, the test is negative. There would be zero false positives or false negatives.

The worst possible test would have about 50% true and 50% false results. There would be no correlation between the test results and having the condition. In fact, it would be pointless to call this a test for Condition Y; we might as well flip a coin and call that Test X, because it would be just as useful.

Okay, so how do we determine the accuracy of a test? We take a bunch of patients, some of whom have Condition Y, and some of whom don’t, and we run them through Test X like sand through a sieve. Then we see which patients the test flagged, and see how accurate it was. (Obviously, we’ll need a way of knowing for sure who has Condition Y; this is usually done by a separate, “gold standard” test with known reliability. Correlation between Test X and the gold standard is what we’re examining here. Why not just use gold standard tests on all patients? Generally these are difficult, invasive, time-consuming, and expensive procedures — not appropriate for everyone, and certainly not of much use in the field.)

We’ll come up with a couple of figures. One is the test’s sensitivity. This describes how well our test picked up Condition Y; how alert was it, how often did it pick up what we’re looking for? If you have Condition Y, how likely is the test to say you have it? How many sick patients slipped past? If our test has 100% sensitivity, it will have zero false negatives; it will never miss, will never fail to flag a patient with Condition Y. A test with 0% sensitivity is blind; it will never notice Condition Y at all.

The other statistic is the test’s specificity. This describes how selective our test is, how cautiously it sounds its alarm. If you don’t have Condition Y, how likely is the test to say you don’t have it? Will it ever be fooled, and wrongly think that you do? A test with 100% specificity will never produce a false positive; if it shouts positive, it’s never wrong. On the other hand, a test with 0% specificity cries positive for every patient who doesn’t have the condition; it’s the boy who cried wolf.
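If it helps to see the arithmetic, the two definitions reduce to simple fractions of the four outcome counts. Here’s a quick sketch in Python; the patient counts are made up purely for illustration.

```python
# Sensitivity and specificity from the four outcome counts described above.
# The study numbers below are hypothetical.

def sensitivity(true_pos, false_neg):
    """Of everyone who HAS the condition, what fraction did the test catch?"""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Of everyone WITHOUT the condition, what fraction did the test clear?"""
    return true_neg / (true_neg + false_pos)

# Pretend study: 100 patients with Condition Y, 900 without.
# Test X caught 90 of the sick (10 misses) and correctly cleared
# 810 of the healthy (90 false alarms).
sens = sensitivity(true_pos=90, false_neg=10)
spec = specificity(true_neg=810, false_pos=90)
print(f"Sensitivity: {sens:.0%}, Specificity: {spec:.0%}")  # 90% and 90%
```

A perfect test scores 1.0 on both; a coin flip hovers around 0.5 on both.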

Together, sensitivity and specificity describe a test’s accuracy. Intuitively, you can see how the two parameters might often work against each other; we can make a test that is extremely “paranoid,” and will catch almost everything — high sensitivity — but will also flag a great many false positives — low specificity. (Heck, we could just make a flashing red light that said “POSITIVE!” every single time, and we’d never miss anyone — of course, it’d have so many false positives that it’d be useless.) Conversely, we can make a test which is extremely judicious and selective, and when it says “positive,” we can trust that it’s probably right — high specificity — but it’ll miss a lot of true positives — low sensitivity.

Ideally, we’d like a test with high sensitivity and high specificity. But when that’s not possible, then at least we need to understand how to interpret the results.

For instance, a test with high sensitivity is very good for ruling a condition out. Because it almost always catches Condition Y, if the test says “nope, I just don’t see it here,” then that’s very trustworthy; if the patient did have it, the test probably would’ve caught it. Think SnOut: a test with good Sensitivity that comes back negative rules a condition Out.

Example: pinpoint pupils. For the patient with altered mental status, this is a very sensitive indicator of opiate use; almost everyone with a large amount of opiates in their system will present with small pupils. However, it’s not very specific, because many people will have small pupils without using narcotics (for instance, due to bright lighting). So if you don’t see pinpoint pupils, that finding rules out opiate overdose with fairly good reliability.

On the other hand, a test with high specificity is very good for ruling a condition in. Because it’s almost never wrong, if it says you do have Condition Y, you can take that to the bank. Think SpIn: a test with good Specificity that comes back positive rules a condition In. (Thanks to Medscape for these mnemonics.)

Example: a pulsating abdominal mass is an extremely specific finding in abdominal aortic aneurysm. Very few other conditions can cause such a pulsating mass, so if you find one, you can pretty reliably say that the patient has a AAA. However, many AAA patients will not have such a mass, so this is not very sensitive. But if you do find a pulsating mass, this rules AAA in fairly well.

 

Warning: Scary Statistics Ahead

Okay, that wasn’t so bad, was it?

Here’s where things get a little weirder. If you’re barely hanging on to the thread so far, you have permission to stop reading now.

Sensitivity and specificity are the most commonly used parameters describing the accuracy of a test. They’re properties of the test itself, so you can hang those numbers on it and they won’t change on you.

However, anyone who’s studied Bayesian statistics will understand that the true accuracy of our test is not just a property of the test itself; it also depends on the prevalence of Condition Y in the population. If Condition Y is exceptionally rare in the patient group we’re looking at, then even if Test X is very specific, most of its positive results will be false positives. Conversely, if Condition Y is exceptionally common, then even if Test X is very sensitive, a substantial share of its negative results will be false negatives.

The reasons for all of this are complex. (For some additional reading, see here, and here.) But the general gist is this: if Condition Y is very unlikely to be present (either because it’s generally uncommon, such as scurvy; or because it’s an improbable diagnosis for the individual patient, such as an acute MI in an 8-year-old), then even if your test “rules it in,” it will still be unlikely. The positive test made it more likely, but it was so improbable to begin with, the odds didn’t change very much. And if Condition Y is very probable (such as a healthy heart in an asymptomatic teenager), then even if your test “rules it out,” the odds still support its presence.

What this all means is that in order to answer our real questions, we need another measure. The positive predictive value (PPV) and negative predictive value (NPV) are the answer, and really, these figures are what we’re after. The PPV answers: given a positive test result, how likely is the patient to have the condition? The NPV answers: given a negative test result, how likely is the patient to lack the condition? In other words, in a real patient, how likely is the test result to be correct?

The trouble is that PPV and NPV aren’t just characteristics of the test; as we saw above, they also depend on the prevalence of the condition, or the “pre-test probability.” What this means is that although the study you’re reading may report predictive values, they are not necessarily applicable to your patient. They’re only applicable to the patient population that was studied. Now, if your patient is similar to that population — in other words, has about the same pre-test probability of the condition as they did — then the predictive values should be correct. If not… not so much.
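To see just how much prevalence matters, here’s a small Python sketch. The 90%/90% test and the prevalence figures are hypothetical round numbers, chosen only to make the effect visible.

```python
# How PPV and NPV shift with prevalence, for a FIXED test.
# Sensitivity and specificity of 90% each are hypothetical.

def predictive_values(sens, spec, prevalence):
    """Return (PPV, NPV) for a given pre-test probability (prevalence)."""
    tp = sens * prevalence                # true positives, per patient tested
    fp = (1 - spec) * (1 - prevalence)    # false positives
    tn = spec * (1 - prevalence)          # true negatives
    fn = (1 - sens) * prevalence          # false negatives
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(0.90, 0.90, prev)
    print(f"prevalence {prev:>4.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```

With the exact same “pretty good” test, a positive result means a 90% chance of disease when half the population has it, but only about an 8% chance when 1% of the population has it. Same test, wildly different meaning.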

So do we have any more tricks? We have one more: likelihood ratios. Likelihood ratios factor out pre-test probability, producing a simple ratio that describes how much the test changed the probability. For instance, suppose we have a patient who we judge has a 10% probability of having Condition Y. We apply a test with a positive likelihood ratio of 5, and it comes up positive. What’s that mean? The math is a little bit roundabout, because we need to convert probability (a percentage of positive outcome out of all possible outcomes) into odds (a fraction of positive outcome over negative outcome): 10% is the same as 1:9 odds. 1/9 times 5 is 5/9, and if we convert that back to a percentage (positive outcome over total outcomes, or 5/14), we have the result: about 36%. The patient now has a 36% chance of having Condition Y. Conversely, suppose it came up negative, and the test had a negative likelihood ratio of .1. The post-test probability (by the same calculation) is now only around 1%.
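That probability-to-odds-and-back shuffle is easier to follow in code. Here’s the same arithmetic as a Python sketch, using the numbers from the example above.

```python
# The worked example above, in code: probability -> odds, multiply by the
# likelihood ratio, convert back to probability.

def post_test_probability(pre_test_prob, likelihood_ratio):
    pre_odds = pre_test_prob / (1 - pre_test_prob)   # 10% -> 1:9 odds
    post_odds = pre_odds * likelihood_ratio          # 1/9 * 5 = 5/9
    return post_odds / (1 + post_odds)               # 5/9 -> 5/14, about 36%

print(post_test_probability(0.10, 5))    # positive test, LR+ of 5: ~0.357
print(post_test_probability(0.10, 0.1))  # negative test, LR- of 0.1: ~0.011
```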

It’s a simple device that would be far more intuitive without the odds vs. probability conversion, but suffice it to say that a likelihood ratio of 1 (1:1) changes nothing; higher than 1 is a positive test (1–3 slightly so, around 5–10 is a useful test, and over 10 is highly suggestive); and less than 1 is negative (1–.5 just barely, around .5–.1 decently, and under .1 strongly). Try plugging numbers into this calculator to experiment — or drag around the sliders in the Diagnostics section at The NNT. The only bad news is that you still need to know the pre-test probability, but the good news is that you can come up with your own estimate, rather than having an inappropriate one already baked into the predictive values.

How to come up with pre-test probabilities? Well, research-derived statistics do exist for various patient groups… but realistically, in the field, you will need to wing it. Taking into account the whole clinical picture, including history, physical exam, and complaints, how high-risk would you deem this patient? You don’t need to be exact, but you should be able to come up with a rough idea. Now, apply your test, and consider the results — about how likely is the condition now? If at any point, you have enough certainty (either positive or negative) to make a decision, then do it; there’s no point in tacking on endless tests if they won’t change your treatment.

Anybody still breathing? We’ll talk about odds ratios, NNT, and other intervention-related numbers another time.

[Edit 5/15/13: the follow-up post on outcome metrics is posted at Lit Whisperers, our sister blog]

People Care

This is the best book any EMT can own.

I say that as someone with a strong clinical focus, and a passion for improving and elevating the educational standards in our field. I am an avowed nerd, and drip rates, T-wave inversions, and case reviews are what keep me awake at night. Yet I consistently recommend this little “warm and fuzzy” booklet to new and experienced EMS professionals alike, and would place it before any electrocardiographic tome or trauma manual. It should be on the shelf of everybody who works on an ambulance, period.

Thom Dick is a longtime paramedic, as well as an author and speaker on the EMS circuit, and several years ago he collected many of his favorite topics into People Care: Career-Friendly Practices for Professional Caregivers. This is a paperback book of less than 100 pages, written in a personal and accessible style, and it compellingly lays out Thom’s idea of what this job is all about.

It’s not about job skills, or tips for getting through your shift, although some of these are offered. Rather, it’s really about how to understand your job — what lens you should use to view this whole EMS business. This may not seem especially important; after all, no matter what rose-tinted goggles you buckle on, you’re still going to end up bringing the same patients to the same places in the same ways (and making the same dollars for doing it). True enough. But what about you? Will you be happy doing it? Passionate? Driven? If you start out as those things, will you stay that way, or will you join the ranks of the angry, the apathetic, the disillusioned?

There are a lot of things wrong with this job. Depending on who you ask, and what their priorities are, you might get different lists. But certainly, EMS is an industry with flaws, and the men and women working to improve it should be seen as heroes. But even if things do get better, what will we do in the meantime? Hell, even after they get better, will you be happy? The goggles you wear can turn the best circumstances bad if that’s your attitude.

Thom’s work is the prescription. When we talked about Joe Delaney, I was channeling People Care; Thom’s kind of EMT is someone who views their business as helping the people who call for us, and who asks for no more than that (or less). It’s not a complicated outlook, but I think it is utterly, absolutely essential.

A lot of things are wrong with this job, but if you have the right lifeline, you can survive all of it and more. Thom’s been teaching these ideas for years now, and you might be surprised at how many of your colleagues and coworkers know him personally or have heard him speak. But if, like me, you haven’t been so fortunate, buy his book. Read it. Recommend it. Loan it out — it’s been out of print for years now. And see if it doesn’t bring some of your problems into perspective.

(I am indebted to Peter Canning for originally introducing me to this book, via his blog, Street Watch. Also of note: Steve Whitehead at The EMT Spot is an old coworker of Thom’s, and his site discusses many of these topics in a similar spirit.)

Polypharmacy in the Elderly

A tremendously valuable Educational Pearl from the wonderful UMEM mailing list, courtesy of Amal Mattu, emergency physician extraordinaire.

We already know that polypharmacy is a big issue in the elderly, but here are a few key points to keep in mind:

  1. Adverse drug effects are responsible for 11% of ED visits in the elderly.
  2. Almost 50% of all adverse drug effects in the elderly are accounted for by only 3 drug classes:
    a. oral anticoagulant or antiplatelet agents
    b. antidiabetic agents
    c. agents with narrow therapeutic index (e.g. digoxin and phenytoin)
  3. 1/3 of all adverse-effect-induced ED visits are accounted for by warfarin, insulin, and digoxin.
  4. Up to 20% of new prescriptions given to elderly ED patients represents a potential drug interaction.

The bottom line here is very simple–scrutinize that medication list and any new prescriptions in the elderly patient!

References
Samaras N, Chevalley T, Samaras D, et al. Older patients in the emergency department: a review. Ann Emerg Med 2010;56:261-269.
[Source]

The value of this is inestimable. We know that polypharmacy is a big deal, but it’s such a big deal that it can be hard to shrink down the problem enough to really consider it when an elderly patient presents themselves. Could their problem involve something on this med list that’s as long as your arm? Certainly, but where to start?

Start with the above. Nearly half of your problems will involve anticoagulants, antidiabetics, and easily misdosed drugs. Those are the usual suspects; they should jump out at you from the list. But we can do even better, because a full third of the total will involve one of three particular serial offenders: insulin, warfarin (aka Coumadin), and digoxin. And let’s add a fourth one: any new or recently modified prescriptions. If any of these are present in a patient with an appropriate complaint or presentation, it should be strongly considered as part of the problem, if not the actual smoking gun.

Insulin is easy, especially if you have access to finger-stick glucometry; diabetic emergencies (especially hypoglycemia), including iatrogenic ones, are so common that you might as well assume anybody with an altered mental status is diabetic — even if they aren’t. Definitive treatment is obviously oral glucose or IV dextrose, as appropriate.

Warfarin is still an extremely common anticoagulant, although a couple of new alternatives are now available, and it requires close and frequent monitoring of levels in order to maintain a therapeutic dose. (The usual standard is a measure of clotting speed called the INR; the test can be performed in the lab, but nowadays can also be done right at the bedside.) Various medication interactions and even dietary changes can shift this level. Overdose is associated with, no surprise, bleeding — in all forms. If necessary, supratherapeutic warfarin levels can be antagonized with vitamin K or IV clotting factors.

Digoxin is seen less today than in yesteryear, but once upon a time everybody and their mother was on “dig,” and it’s still used with some regularity. Its most common application is for rate control of atrial fibrillation patients. Although other antiarrhythmics are now more common, dig has the peculiar magic of reducing cardiac rate while actually increasing contractility (negative chronotropic but positive inotropic effects). However, its therapeutic range is narrow and is easily shifted by pharmacological, renal, and other issues; as a result, dig toxicity is famously common. Overdose symptoms include GI problems and neurological complaints such as visual disturbances and changes in mood or energy level. It can also present prominently on the ECG, with the most classic sign being degradation of AV conduction with an increase in atrial and ventricular ectopy — for instance, slow A-fib or atrial tachycardia, a third-degree AV block, and a junctional escape with PVCs. (As a result, the atrial fibrillation patient controlled on dig may present with an unexpected “regularization” of his pulses, due to a junctional or ventricular escape taking over from the usual A-fib. This is a clue even the BLS guys can catch.) Treatment is supportive for arrhythmias and heart failure; severe cases can be managed with Digoxin Immune Fab (aka Digibind or Digifab).

Drug Families: Steroids and Antibiotics

When things go wrong
as they usually do —
Inflammation!

Inflammation

There are a lot of bad things that can happen to your body. Homeostasis, as we like to call it, is that smooth state when all your bits and pieces behave just as they ought to; and “bad things” are anything that knock this out of whack.

And what’s funny is that, no matter what that insult is, you can pretty much count on the body to respond with inflammation. Other, more specific things too, but inflammation will be there. It’s physiological duct tape: your basic, one-size-fits-all solution for any physical calamity.

Inflammation is caused by a complex blend of chemical mediators, but physically, the result is usually some combination of five classic signs.

  • Heat [calor]
  • Redness [rubor]
  • Swelling [tumor]
  • Pain [dolor]
  • And sometimes included, a general loss of function [functio laesa]

Try the Latin if you’re trying to impress someone at the bar.

Suppose you fall and bang your elbow, causing minor soft tissue damage. The body reacts immediately by activating a local inflammatory cascade, whereby numerous processes swing into gear. Local vasodilation occurs, bringing more blood into the area, to support faster healing; this increased bloodflow (hyperemia) produces the redness and warmth associated with injury. Vascular permeability is also increased, allowing fluid to leak into the surrounding tissue, which results in edematous swelling; this not only conveys healing factors into the damaged area, it also physically limits movement around the affected joint by “self-splinting.” Other chemical mediators increase your local sensitivity to pain, which further discourages you from movement; a decrease in the joint’s function is the result.

All of which is part of the inflammatory package. Neat!

The inflammatory cascade in soft tissue damage

Now suppose you catch a cold. Viral particles enter your mouth or nose, whether by direct contact or by inhaling them as an aerosol, and lodge somewhere in your oronasopharynx. Our response: inflammation! Your immune system recognizes the intrusion and responds with an influx of infection-fighting white blood cells, such as neutrophils and monocytes, along with the same cocktail of general inflammatory mediators (bradykinin, cytokines, etc.) that we saw with the injured elbow. The result? Swelling; excess mucus production; pain (as in sore throat); a general discomfort and sense of crumminess; and in more systemic cases, a fever to make the environment less hospitable for the virus.

It’s all the same story. When things go wrong, the body responds in various ways, but it’s almost always accompanied by some sort of inflammatory response to facilitate and assist the repairs.

Sometimes, however, this process becomes maladaptive. Whether it’s an immune response to infection or a local response to injury, short, appropriate, and effective inflammatory activity is a valuable part of our defenses — but if it becomes too severe, lasts too long, or serves no purpose, then it can become part of the problem. For our bumped elbow, inflammation will promote healing, but if after a few days we find that the area is still swollen, this is no longer valuable; it’s impeding our ability to use the joint, which is what we need to do in order to circulate blood and encourage further healing. Our body’s response was excessive. So we apply ice to vasoconstrict the area, elevate the extremity, and take anti-inflammatory drugs, all to reduce that local edema and tamp down our inflammatory freak-out.

Key players of inflammation in sepsis

Numerous illnesses and injuries exhibit this sort of excessive, harmful inflammatory response. For example:

  • Traumatic brain injury is deadly because swelling within the cranium has nowhere to go, resulting in a self-feeding cycle of increased pressure and increased damage.
  • Sepsis occurs when an infection becomes widespread enough that it causes a system-wide inflammatory response, resulting in organ damage and vascular disruption — this cascade is self-feeding and can quickly become more harmful than the infection itself, even causing death long after the initial infection has been eradicated.
  • COPD and asthma are caused, in part, by inflammation of the lower airway (due to prior damage or various dysfunctions).
  • Shock kills early by hypoperfusion, but if that is survived, it kills later by an uncontrolled inflammatory cascade resulting from that hypoperfusion. If not managed early, this cascade can continue to spread independently of the original shock state.
  • The entire spectrum of autoimmune diseases is characterized by an inappropriate immune response to the body’s own tissues.
  • Allergic reactions, including lethal anaphylaxis, are hypersensitive immune responses to benign foreign agents like dust or foods.

To make a long story short, sometimes, inflammation sucks.

 

Steroids

Steroids are modern medicine’s answer. Steroids are a large class of molecules, including the anabolic steroids that “pump you up” and sex steroids like testosterone and estrogen, but what we’re interested in are glucocorticoids (sometimes called corticosteroids, which is actually a broader category, but the terms are often confused). Glucocorticoids are interesting hormones with numerous effects; as a matter of fact, they’re part of the “fight or flight” stress response we talked about before. (Put simply, catecholamines like adrenaline give you a boost to help deal with danger right now; glucocorticoids, on the other hand, give you a slightly more delayed “second wind,” so you’ll still have some juice a few hours later.) And fighting infections and healing injuries is a real waste of energy when we’re running from wild tigers. The result? Glucocorticoids inhibit the inflammatory response.

They can therefore play a role in the management of all the problems we just mentioned. Maintenance-type inhalers for asthma and COPD are often steroids. Anti-allergy nasal sprays too. Appropriate steroid use can be complex, because we must be careful not to over-inhibit our inflammatory system; for instance, although they would seem like an obvious answer to sepsis, their use for those patients is unclear and has long been controversial. Or how about using steroids to treat epiglottitis, an infectious swelling of the epiglottis that can obstruct the airway? We would expect the steroids to combat the swelling, but also to impair our ability to fight the underlying infection. So finding the balance can be difficult.

Corticosteroids can be administered locally, when a local effect is desired, such as via metered-dose inhaler for asthma. Or they can be administered globally for systemic conditions, such as by IV or oral routes for autoimmune conditions.

 

Antibiotics

Of course, sometimes the body is fighting for a reason.

As we’ve seen, the body responds with inflammation to a wide range of insults, but one of the most common is infection. And in the many cases of infection when our primary goal is simply to eradicate the source, pharmacological support can be beneficial.

Antibiotics are generally well-recognized as agents that kill bacteria. The terminology has become somewhat clouded nowadays, as the word “antibiotics” is sometimes used to strictly mean anti-bacterial agents, and sometimes to mean all anti-microbials, including anti-fungals and anti-virals. But the general idea of immunosupport is the same.

These agents generally work in one of two ways: either by directly killing the microbe, or by impeding its ability to replicate. They’re tuned so that they affect the bad guys without harming (not too badly anyway) our body’s own cells.

It’s tempting, therefore, to think of antibiotic therapy as the natural opposite of steroids, and this has some truth to it. In the case of infection — which, remember, is not the only cause of inflammation — steroids do inhibit the immune response. But bear in mind that antibiotics do not, as a general rule, actually support or promote the body’s inflammatory response; rather, they work independently by attacking the infection directly along their own pathways. The result is that some pathologies (such as the contentious cases of sepsis and epiglottitis) may respond both to steroids — to manage the excessive inflammatory response — and antibiotics — to help eliminate the source infection.

 

Examples

Once again, remember that common drug suffixes are usually only applicable to generic drug names. Trade names tend to be unique.

Steroids

  • Drugs ending in -one (prednisone, hydrocortisone, clocortolone, etc.)
  • Drugs ending in -ide (fluocinonide, budesonide, desonide, etc.)
  • Drugs with pred in the name (prednisolone, loteprednol, prednicarbate, etc.)
  • Drugs with cort in the name (fluocortin, Cyclocort, Entocort)

Antimicrobials

  • Drugs beginning with ceph- or cef- are antibiotics of the cephalosporin type (cefixime, cephalexin, cefepime, etc.)
  • Drugs ending in -illin are antibiotics of the penicillin type (penicillin, methicillin, nafcillin, etc.)
  • Drugs ending in -cycline are antibiotics of the tetracycline type (doxycycline, methacycline, etc.); not to be confused with the -tyline of tricyclic antidepressants.
  • Drugs ending in -azole are generally from a large family that can have antibiotic, anti-fungal, and anthelmintic (anti-parasitic) effects (metronidazole, fluconazole, miconazole, etc.). However, this does not include the -prazole drugs (omeprazole, pantoprazole, and others) which are actually proton pump inhibitors, with no antimicrobial effects.
  • Drugs ending in -floxacin are antibiotics of the quinolone type (levofloxacin, ciprofloxacin, etc.).
  • Drugs ending in -mycin are usually antibiotics of the macrolide type (azithromycin, erythromycin, etc.), though a few -mycin drugs belong to other classes (vancomycin, for instance, is a glycopeptide)
  • Drugs beginning with sulf- are antibiotics of the sulphonamide type (sulfamethoxazole, etc.)
  • Drugs ending with -adine are antivirals of the adamantane type (amantadine, rimantadine)
  • Drugs containing vir are generally antivirals (acyclovir, oseltamivir, ribavirin, efavirenz), including antiretrovirals for HIV treatment
  • Drugs ending with -vudine are antivirals (lamivudine, telbivudine, etc.)
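For illustration only, the suffix rules above can be sketched as a simple lookup. The helper below and its pattern list are hypothetical; they cover just a few of the suffixes listed, and (per the caveat above) they only make sense for generic names, not trade names.

```python
# A toy sketch of the suffix rules above. Real drug nomenclature has
# exceptions, so treat this as an illustration, not a reference.

RULES = [
    # -prazole must be checked before the broader -azole rule
    ("prazole",  "proton pump inhibitor (NOT an antimicrobial)"),
    ("illin",    "penicillin-type antibiotic"),
    ("cycline",  "tetracycline-type antibiotic"),
    ("azole",    "azole family (antibiotic/antifungal/anthelmintic)"),
    ("floxacin", "quinolone-type antibiotic"),
    ("mycin",    "macrolide-type antibiotic"),
    ("vudine",   "antiviral"),
]

def guess_family(generic_name):
    """Guess a drug family from a generic name's ending; None if no match."""
    name = generic_name.lower()
    for suffix, family in RULES:
        if name.endswith(suffix):
            return family
    return None

print(guess_family("ciprofloxacin"))  # quinolone-type antibiotic
print(guess_family("omeprazole"))     # proton pump inhibitor (NOT an antimicrobial)
print(guess_family("Tylenol"))        # None; trade names don't follow the rules
```

Note the ordering: because -prazole drugs would otherwise match the -azole rule, the more specific pattern has to come first, which mirrors how you should read the list above.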

More Drug Families: Stimulants and Depressants; ACE Inhibitors and ARBs; Anticoagulants and Antiplatelets