How to Do a Damn Journal Club

Brandon and Steph’s Note: As pharmacists, we need to be able to efficiently assess a journal article for its usefulness. In your practice life, prescribers will ask your opinion on X therapy given the new data that was just published. Or you’ll get a patient that REALLY wants to take scorpion venom to treat their cancer (and they have an article to show how great it works). Or you’ll have someone that wants to know if the Pfizer COVID vaccine is safe for their pregnant wife (and if so, does the trimester matter?).

And, sure, you can temporarily get off the hook with the standard, “I’ll look into it and let you know” type of answer. But then you gotta actually be able to read those articles and formulate a reasonable, educated opinion. Also, no pressure, but your answer will impact a patient’s life.

We are going to try really hard not to spend too much time here in this post.

Sometimes, as both a student and a real-life practicing pharmacist, you’ll have to lead a journal club, which brings us to the topic of today’s post.

There isn’t necessarily a “right” way to do a journal club, but there is definitely a “wrong” way. This article will teach you the ins and outs of conducting your own damn journal club (don’t worry, we’ll try our best to stay off of our soapbox). Let’s dive in.

Journal Club Preparation: What to do Before Your Presentation

Before starting, take a moment to consider WHY you’re doing a journal club in the first place.

We’ll give you a hint — the answer isn’t, “Because my preceptor told me to,” or “To satisfy a requirement for my P4 portfolio.”

There are two big reasons to do a journal club. Broadly speaking, they apply whether you are a student, resident, or a seasoned practitioner:

  1. To develop your ability to sift through minefields of clinical data and formulate an evidence-based plan for your patients

  2. To improve your ability to communicate that information to others

When you do it right, everyone in your audience benefits from your journal club. Get a group of clinicians together to discuss an article, and they all leave the discussion as slightly better clinicians. By sharing knowledge and insight, your efforts compound across the whole group. And, through thoughtful discussion, you’re all better equipped to tackle the difficult drug-related questions that come at you from prescribers and patients.

Second, put some thought into WHAT topic to present.

Depending on whether you are a student (and on the preferences of your preceptor), you will often get some say in deciding which journal article to present. So…what do you choose? Obviously, your starting criteria should be the topic of your rotation. If you’re on an infectious disease rotation, it’s probably best not to suggest an article about the latest findings in heart failure.

We encourage you to find a topic that you’re genuinely interested in learning more about. Don’t just pick blindly from the latest NEJM. Yes, we know it’s “high quality” and has the article sections nicely organized to match your presentation template. That’s a perk, for sure. But to do a journal club right, you need to learn more about the underlying disease state (more on this in a bit). You may as well make it fun and beneficial for yourself by picking something that piques your interest or even something that you know is a weak spot for you.

Unless your preceptor tells you otherwise, feel free to peruse different journals, even those that have lower impact factor scores (gasp!). Believe it or not, you can learn a lot from a BAD journal article — in fact, one might argue you can learn MORE from a bad article than a great one. The trick is to recognize the things that make the study lower quality.

Hone this. You need it.

Just to be clear, we’re not saying that you should always choose articles of poor quality. It’s just that, in the future, you may be managing patient populations that don’t make the front page of NEJM. Or you might specialize in an area (e.g., oncology) where drugs can get an accelerated approval based on their Phase 2 results. You won’t always have data from the gold standard Phase 3 Randomized Controlled Trial. You’ll do yourself a favor by getting comfortable evaluating less-than-ideal publications (the better to hone your BS Detector!).

Third, consider WHO your audience is.

Actually, this is a good rule for every presentation. Put yourself in the shoes of your audience. What do they need to know? Why is this important to them? The information that a pharmacist finds relevant is very different from what a nurse practitioner finds relevant. Make sure that you’re presenting information that your audience will find useful.

For example, let’s say you are a student taking Brandon’s APPE rotation (you lucky duck, you). And let’s say you’re presenting a journal club to the pharmacy team about a new drug that was just approved for some type of cancer. What does the pharmacy team need to know? Well, for starters, we need to know the obvious stuff like side effects and drug interactions. But we also need to know where this new drug fits into the current treatment paradigm. Is it only to be used as the third line of therapy (or later)? Is it only for patients with metastatic disease? Where does the new drug fit (and how much does it cost) compared to current treatment options? These are the types of questions that the pharmacy team will be fielding from patients and prescribers, so this is the information that you’ll want to present.

Fourth, include handouts.

This is a relatively simple one, but even if you are using PowerPoint to present your journal club, be sure to include handouts. At a minimum, your handout NEEDS to include the paper you are presenting. Yes, you should have already emailed the paper to your audience (at least a few days in advance, but ideally a week). But most folks in your audience will not remember to bring this to your presentation, and if they don’t have laptops to pull up the article in the moment, it could make it more difficult to have a robust discussion. It REALLY helps to facilitate discussion when everyone can look at the paper together. Your handout can also include your summary — which brings us to our next section.

Should You Use a Journal Club Template?

Many schools (or even specific rotations) will provide you with a journal club template. You can find a bunch right here if you don’t have one. Journal club templates can make for nice handouts during your presentation. They can help guide your thinking when you are reviewing the article, and they are a good place to drop important reference points.

So, yes, use a journal club template if that’s your thing. Go nuts.

How it feels when you’re in the journal club audience and you realize the presenter is just going to read directly from their handout.

But (you knew this was coming, didn’t you?)…

Please, for the love of all that is holy, don’t treat the template like a scavenger hunt.

There is a tendency, especially when your rotation gets crazy and you’re pressed for time, to plug the information into each box and then read it aloud like you’re checking off boxes.

“Here’s the inclusion criteria. The exclusion. The primary endpoint p-value…”

Obviously, that information needs to go there. It serves as a quick summary and can be useful for your audience to reference. But it’s almost too easy to transcribe data onto your template and move on to the next thing. You’ll pat yourself on the back because this section is “done.” The problem is, you haven’t actually EVALUATED a damn thing.

You have to think critically about the information you’re plugging into your template. This critical thinking and analysis leads to the bulk of the discussion of your journal club. For every section of your presentation template, consider the following:

  • Do you understand why a particular methodology was selected? What are the advantages and disadvantages?

  • Do you agree, or would you have designed/presented the information differently?

  • Is the result clinically significant, or merely statistically significant?

  • How can this information be applied to your patient or patient population?

  • How does this therapy fit with other available management strategies?

  • What additional information would you like to have to make a decision about X therapy?

Ultimately, whether you use a template or not, there are a couple of questions you’re seeking to answer with a journal club:

  1. If you had to repeat this study, what would you have done differently? This is a great question to “reverse engineer” your brain into thinking critically about the information in the journal. When you think about blowing the study up and starting from scratch, it’s sometimes easier to find holes in the methodology.

  2. How will you adopt this new information into your practice? Will you use this new drug to treat your patients? If so, which ones (and under what circumstances)? Remember, the end goal of a journal club is to help you make informed decisions about patient care. Make sure you can answer this question.

What Makes a Good Journal Club?

Yes, that’s right. This is exactly what a body of literature looks like.

Let’s start with a dark truth — a journal article (by itself) isn’t all that useful to a clinician. This makes sense when you think about it.

The practice of medicine is shaped by the sum total of all clinical studies that have ever been recorded. Taking out one study and analyzing it is like removing one picture from one of those flipbook animation thingies. You really need the total package to see the entire picture.

By carefully selecting your patient population and your comparator arms, you can make an isolated clinical trial look really good (more on this in a bit…). You can hit your primary endpoint with p-values and confidence intervals that are borderline magical.

But…that’s just one study. And without a larger body of evidence to support it, it doesn’t help us all that much when we’re treating actual patients.

So, what should you do?

Simply put, when leading a journal club, it is YOUR job to put the article in its proper context. That means you need to know the background and practice guidelines for the therapeutic area. That means you need to know what typical patients with the disease state look like. That means you need to know what other treatment options are available. In short, you need to do a shit-ton of homework.

Does this make journal club a lot harder? Absolutely. But will it teach you more, and be more useful to your audience? You betcha.

Another useful tip is to make it a discussion rather than a presentation. Even though you are leading the journal club, the “club” part implies some back and forth between multiple members. It’s OK to ask questions! It usually works best to “present” the background and study design first, and then open the conversation up a bit during the results (and especially the discussion section). You’ll find the discussion will take some interesting turns (and you’ll likely learn a lot) in the process.

We mentioned this above, but it’s worth repeating. The crucial questions you need to answer in order to have a good journal club are:

  • How (if at all) will you adopt this study into your pharmacy practice?

  • If you had to repeat this study, what would you have done differently?

Journal Club Tips

If you’re anything like us, you might now find yourself thinking, “Yeah, yeah, but what do I actually TALK ABOUT during my journal club?”

And hey, we wouldn’t be doing our tl;dr jobs if we didn’t give you guidance here.

In terms of what to talk about, remember that most of the available journal club templates we mentioned will inherently provide some semblance of structure to follow. If you’re not using a template (or you just REALLY want to avoid “reading” your template), a good framework to follow for your journal club presentation is: Who, What, Where, When, Why.

  • Who published the article? Who are the patients in the trial?

  • What was the intervention?

  • Where did the study take place?

  • When/how long were the intervention and follow-up periods?

  • Why was this study done?

This list is deceptively simple. But being able to answer these questions (especially the “why” part) will put you on your way to journal club success. Try to put yourself in the shoes of your audience. What do they need to know? What is most important to them? A group of clinicians, for example, will likely pay special attention to the inclusion and exclusion criteria of the study. They need to know what the average patient in the study looked like since they’ll ultimately be deciding whether or not to give the therapy to an actual patient one day.

Spotting the “Hacks” in Clinical Studies

Who, what, where, when, why is a great starting point for your presentation, but it won’t give you all the ammunition you need for a fruitful discussion. To get that, you need to talk about the strengths and limitations of the study. These are the extra things that add weight to (or subtract weight from) the conclusion the authors reached.

While this is by no means an exhaustive list, here are some common “hacks” you may encounter. They are tricks that authors can use to make a study look a lot better than it really is.

The Role of Funding in a Clinical Trial

Almost every study you read will be funded by a drug company. Is that a source of bias? Of course. But think about it — Who else is going to pay for an expensive clinical trial? Who else has a vested financial interest in the drug?

So…yes, you can list it as a limitation…but it doesn’t completely invalidate the study. It just means that you need to read with an eagle eye and your BS radar tuned up.

Plus, there are different levels of drug company sponsorship. Just because Pharma sponsors a study doesn’t mean they have a hand in every single aspect of the publication. Sometimes they just provide the drug and basic funding. Sometimes they’re involved in study design. Sometimes they’re doing data collection, analysis, and interpretation. Sometimes they hire a medical writer to write the manuscript.

This is basically the Abstract of a Clinical Study

Here’s a good rule of thumb. The less involved the drug company is, the better. And the more transparent they are about their level of involvement, the better.

In our (anecdotal) experience, it’s becoming increasingly common to see Pharma act like a “helicopter parent” and micromanage every aspect of the publication. That isn’t necessarily a bad thing, as long as they’re upfront and transparent about it.

But on that note, keep this in mind…

The study you’re reading is basically a sales brochure for the drug company (even if it’s published in a high-impact journal like NEJM).

We’ve written before about the dangers of only reading a study’s abstract. We’ve argued that the abstract is like a magazine cover for the drug company — nothing more than a sexy headline to convey their message.

But it actually extends deeper…

The entire paper is a sales brochure (especially when the drug company is involved in every step of the data collection, analysis, interpretation, and presentation). Think about it.

The drug company gets to pick WHICH graphs they show you. Just because you’re looking at a super-separated Kaplan-Meier curve doesn’t mean the endpoint is relevant.

And pay attention to the axes. Did they scale the y-axis differently just to make those curves separate more dramatically?
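
To see how much axis scaling alone can change the story, here’s a toy sketch in Python. The curves and every number in it are completely made up for illustration:

  # Toy demo: the SAME two made-up "survival" curves on two y-axis scales
  import numpy as np
  import matplotlib.pyplot as plt

  months = np.arange(0, 25)
  control = np.exp(-0.030 * months)    # fabricated control-arm curve
  treatment = np.exp(-0.025 * months)  # fabricated treatment-arm curve

  fig, (ax_full, ax_zoom) = plt.subplots(1, 2, figsize=(8, 3))
  for ax in (ax_full, ax_zoom):
      ax.plot(months, control, label="control")
      ax.plot(months, treatment, label="treatment")
      ax.set_xlabel("Months")
  ax_full.set_ylim(0, 1)       # honest scale: the curves nearly overlap
  ax_zoom.set_ylim(0.45, 1.0)  # zoomed scale: same data, "dramatic" separation
  ax_full.set_title("Full y-axis")
  ax_zoom.set_title("Zoomed y-axis")
  ax_full.legend()
  plt.show()

Same data, two very different first impressions. Check where the y-axis starts before you let any curve impress you.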

Our goal here isn’t to make you a nihilist or to say that everything you read is a lie.

We’re just saying it’s better to approach a study thinking, “The benefit of this thing is probably overstated, and the risk is probably understated. Let the data prove me wrong!”

To summarize, drug companies will fund almost every clinical trial you read. Understand the inherent risk of bias that this presents, read the study with an appropriate grain of salt, and continue with your assessment.

Let’s point out some other ways to make a study look better than it really is.

Cherry-Picking the Study Population

One way to mask the frequency of side effects of a drug is to over-represent younger and healthier people in the study population. This is especially common in oncology trials, where younger patients can better handle the toxic treatments (and they tend to live longer to boot!). In order to extrapolate the results of a study to real patients, the patients in the study have to be similar to the patients you actually treat.

Will the results of a study done in mostly 45-year-old Caucasian males apply to a 78-year-old Vietnamese female? Are you telling me that NONE of the patients in this American cardiovascular disease study are obese? Or have diabetes? This drug just got approved for 5th-line multiple myeloma…how is it that the treatment population in the study all had an ECOG performance status of 0-1?

These are the types of hard questions you should ask yourself during a journal club. It’s a crucial way to determine whether or not the results will apply to patients in the real world.

Cherry-Picking the Endpoints

Do you want to know one of the easiest ways to manipulate the data to make a drug look fantastic? Move the “goalposts” of what you’re measuring. Clinical trial endpoints are so important that we dedicated an entire post to them. Read that to get an in-depth guide.

Take a close look at both the primary and secondary endpoints of the study. Are they relevant to the disease state? Did the authors create a composite endpoint or use a surrogate endpoint? If they did, was that appropriate? Is the chosen noninferiority margin clinically significant? Is there an endpoint that would have made sense to study for the disease state or treatment that’s conspicuously missing? Why?

Cherry-Picking the Control Group

This is straight out of the “Used Car Salesman’s Playbook.” If you do a study comparing your new drug to an ineffective (or overly toxic) therapy, your new drug will look amazing. But is that reflective of how patients are treated in the real world?

Look, we know glatiramer isn’t all that effective for multiple sclerosis. So please don’t try to sell us the “dramatically improved disability progression numbers” with your new, fancy disease-modifying therapy when we wouldn’t have used glatiramer in that patient population in the first place. Capisce?

Displaying Impressive Graphs that Don’t Actually Matter

Don’t be fooled by the pretty colors. Make sure the charts you’re reviewing actually SAY something.

We hinted at this one above, but it’s worth mentioning again. Don’t be swayed by the glossy, pretty pictures in a study without first stopping to smell the roses. Does the endpoint depicted in the graph actually matter? Are the axes evenly distributed to avoid exaggerated results?

We won’t call out anyone specifically, but we’ve recently seen a publication on prostate cancer that displayed a very prominent Kaplan-Meier curve depicting the difference in “Pain-Free Progression” between the control and treatment groups. To the best of our knowledge (and practice experience), that is a 100% made-up endpoint. It was a fantastic-looking graph, tho.

Using Percentage Changes for Small Sample Sizes (and Vice-Versa)

Whoa, this intervention reduced the primary endpoint by over 30%! Oh, BTW, it was only in 40 people…

I mean, c’mon, is that really generalizable to an entire patient population?

Stealing another page from the “Used Car Salesman’s Playbook,” positive results can be presented as percentages (rather than absolute numbers) to make the outcome seem bigger. A “30% reduction in the outcome” is a sexier finding than “the outcome occurred in 7 patients instead of 10.”

You’ll see the opposite as well. A negative side effect will be presented as a number (rather than a percentage) to make it seem smaller. It’s a lot easier to report that 8 people compared to 4 people experienced a major bleed. Otherwise, you’d have to say that there was a 100% increase in major bleeds.

Another way this trick presents itself is when the authors present the median even in cases where the mean would be more appropriate.

This game of basic number manipulation isn’t necessarily false…it’s just sneaky. It’s a way of exploiting our human nature. It’s the same thing the car dealership does when they list that vehicle with a sale price of $24,999. Your lizard brain tells you you’re getting a steal for less than $25K, when in reality we’re talking about a one-dollar difference.
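
Here’s that bleed example as a quick Python sketch. Same made-up numbers as above, two framings (the arm size is hypothetical, just to put the absolute difference in context):

  # Same (made-up) safety data, framed two ways
  control_bleeds = 4
  treatment_bleeds = 8
  n_per_arm = 500  # hypothetical number of patients per arm

  relative_increase = (treatment_bleeds - control_bleeds) / control_bleeds
  absolute_increase = (treatment_bleeds - control_bleeds) / n_per_arm

  print(f"Relative framing: major bleeds up {relative_increase:.0%}")    # up 100%
  print(f"Absolute framing: {treatment_bleeds} vs {control_bleeds} events, "
        f"a {absolute_increase:.1%} difference")                         # a 0.8% difference

Both statements are true. One sells headlines; the other helps you counsel an actual patient.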

Using Relative Risk Instead of Absolute Risk

This is a different, but similar, shade of the example above. If Treatment A carries a risk of 15% and the new, fancy Treatment B carries a risk of 10%, the relative risk reduction is 33% for Treatment B. Yeehaw! That’s pretty impressive, right? A third less!

But let’s be real here. It’s really only a 5 percentage point reduction in absolute risk. Is a risk of 10% for Treatment B really what we’re after? And is that change from Treatment A worth the potential increased cost or toxicity?

FYI, most clinical studies (and especially the media) report relative risk reductions (and increases) instead of absolute risk. They do this because it’s more attention-grabbing. Unfortunately, you will have to go through the article and calculate absolute risk by hand. Only then can you determine if the difference is clinically relevant.
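
If you want to run that math yourself, here’s a minimal sketch using the Treatment A/B numbers from above. We also threw in the number needed to treat (NNT), a standard evidence-based-medicine measure that falls out of the same arithmetic:

  # Absolute vs. relative risk, using the 15% vs. 10% example above
  risk_a = 0.15  # event risk with Treatment A
  risk_b = 0.10  # event risk with Treatment B

  arr = risk_a - risk_b  # absolute risk reduction: 0.05 (5 percentage points)
  rrr = arr / risk_a     # relative risk reduction: ~0.33 (the "33%!" headline)
  nnt = 1 / arr          # number needed to treat: 20 patients per event avoided

  print(f"ARR: {arr:.1%}   RRR: {rrr:.0%}   NNT: {nnt:.0f}")

An NNT of 20 is a much more honest conversation starter than a 33% relative risk reduction.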

Using “Marketing” Terms Instead of “Statistical” Terms

There’s nothing more annoying than reading that a result “approaches significance.” This made-up term is a load of marketing bull hockey. The whole point of a p-value is to give a yes-or-no verdict against a prespecified threshold: significant or not. There is no in-between. So please don’t accept that a p-value of 0.054 is “approaching significance” if the threshold is set at < 0.05.

Other common marketing terms include descriptions of “substantial” or “profound” effects. If you see these terms, remind yourself that they are another way of saying, “We didn’t get the result we were hoping for…but we really felt like we should have!”

Or, put it this way…

If the result was powerful enough to achieve statistical significance, the authors would have used statistical terms…wouldn’t they? If you see a bunch of superlative adjectives flying around, your BS Detector should go off.

Powering for Non-Inferiority, Then Doing a Superiority Analysis

You technically can design a non-inferiority trial and then analyze it as a superiority trial. But there are pretty strict dos and don’ts when doing this (read a nice review article here). And either way, the practice is generally (at least in the circles we walk in) not welcomed with open arms. The FDA also has some guidance on it, and they're definitely not in love with the practice.

It’s looked at as a form of data dredging (see the next tip). You’re changing the analysis of your data after the fact in a way that your trial wasn’t designed for. And you’re doing it in a way that clearly benefits you financially.

When you’re reading the latest sexy study, watch out for this trick. It’s more common than you think. Honestly, that's pretty surprising given the FDA guidance on the subject.
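
To make the distinction concrete, here’s a minimal sketch of the underlying logic. It assumes the endpoint is a risk difference (new drug minus control, where lower is better), and the margin and confidence interval are completely made up:

  # Sketch: judging noninferiority vs. superiority from a 95% CI
  ni_margin = 0.02                 # prespecified noninferiority margin
  ci_low, ci_high = -0.015, 0.018  # hypothetical 95% CI for the risk difference

  noninferior = ci_high < ni_margin  # entire CI sits below the margin
  superior = ci_high < 0             # entire CI sits below zero

  print(f"Noninferior: {noninferior}")  # True
  print(f"Superior: {superior}")        # False: the CI crosses zero

In this made-up example, the drug clears the noninferiority bar but has no business claiming superiority. When a paper makes that leap, check that the switch was prespecified in the protocol.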

Data Dredging

Aw, we can be happy that this squirrel found a nut (or 2)! But we shouldn’t be quite so elated when it seems like study authors magically find nuts…

You know the saying, “Even a blind squirrel finds a nut every once in a while?” This is an apt description of data dredging. Basically, if you analyze the data from enough angles, you’re probably going to find something significant. But, if you've read our article on the p-value, you know how fickle it is. Does massaging the data until you have a statistically significant result make it clinically significant?

Broadly speaking, if you see endpoints that are "post-hoc," that means they were added on AFTER the fact. This is a huge red flag for data dredging. It's always best to determine your endpoints up front.

Another general rule: the more secondary endpoints there are, the more likely there was dredging. Obviously, this isn’t ALWAYS true, just something to be aware of.

Always be on the lookout for the term “Exploratory Analysis,” because that is often a polite way of saying p-hacking. A neat trick is to look the trial up on clinicaltrials.gov and see what it was supposed to be studying. If the endpoints published in the paper aren’t what’s listed on that site, you can pretty much assume the authors went fishing for significant data.
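
To see why a pile of endpoints practically guarantees a "significant" finding somewhere, here's a back-of-the-envelope sketch (it assumes the tests are independent, which is a simplification):

  # Chance of at least one false-positive "significant" result
  # across k independent endpoints, each tested at alpha = 0.05
  alpha = 0.05
  for k in (1, 5, 10, 20):
      fwer = 1 - (1 - alpha) ** k
      print(f"{k:>2} endpoints -> {fwer:.0%} chance of a spurious finding")
  # 1 -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%

With 20 endpoints, the authors are more likely than not to stumble onto a nut even if the drug does nothing.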

Overly Complicated Statistics/Trial Design

Think back to your high school chemistry class. The whole point behind the scientific method is to come up with reliable, reproducible results. When a study’s design is overly complicated, whether due to treatment groups, schemes, or statistical analyses, it makes it that much more difficult to replicate the results. It also makes it more difficult for the peer review community to call foul.

If you reach Crazy Charlie level trying to prepare your journal club, the study design is probably too complicated.

Always evaluate whether or not there was a better way to answer the question at hand, and whether the trial protocol is realistic in the real world. Would you have designed the study differently? Used different patients or endpoints to make this more applicable to the population in question?


Using One-Tailed Instead of Two-Tailed Analyses

We can already see your eyes starting to glaze over, so let's start by checking out this article. It does a great job of spelling out the differences between one-tailed and two-tailed tests.

A two-tailed test assumes that your intervention could have an impact in either direction. Meaning, the new drug you are testing could improve the disease...or it could make it worse. Sure, you may SUSPECT that the drug is going to help your patients; that's why you're giving it to them. But really, you have no way of knowing if the drug will do more good than harm. The two-tailed analysis allows you to test both directions of your intervention.

A one-tailed test, by contrast, assumes that there is only one possible direction of benefit. Your intervention couldn't possibly be worse than the control group...it MUST be better than (or equal to).

Here's the big takeaway. It is much easier to achieve statistical significance with a one-tailed test...but it is almost ALWAYS appropriate to conduct a two-tailed analysis. There are very few instances when a one-tailed test is OK.
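
Here's a quick sketch showing the difference for the same (completely hypothetical) test statistic:

  # One-tailed vs. two-tailed p-value for the SAME hypothetical result
  from scipy import stats

  z = 1.80  # made-up z statistic from some trial
  p_one_tailed = 1 - stats.norm.cdf(z)        # ~0.036 -> "significant" at 0.05
  p_two_tailed = 2 * (1 - stats.norm.cdf(z))  # ~0.072 -> NOT significant

  print(f"One-tailed p = {p_one_tailed:.3f}")
  print(f"Two-tailed p = {p_two_tailed:.3f}")

Same data, same effect size; the one-tailed test just sets a lower bar for claiming significance.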

If you see a one-tailed test when you're working on a journal club, look at it with a VERY skeptical eye. It's most likely inappropriate.

tl;dr’s Journal Club Summary

Basically, we’re asking you to think critically. For the love of Pete, if you’re going to use a template, use it as a guide rather than a script. Hone your BS Detector. Encourage discussion in your group session, and make it an environment where it’s safe to share thoughts, even if others disagree or the idea turns out to be wrong.

There are more gray areas in pharmacy and medicine than you would ever guess. Pharmacy school will throw treatment guidelines at you for every disease state you can think of, and you have to know how to navigate them.

But if you really want to be a successful pharmacist, you have to learn how to evaluate and discuss the evidence. The more articles you read (and the more journal clubs you participate in), the better you will become.

Lastly, have fun with your journal clubs! (Really! They should be fun!)

Further Reading

We've given a high-level summary here, but if you want to learn more about journal clubs, biostats, and literature evaluation, check out these other posts on our site. They'll cover a lot of ground that we weren't able to get to in this post.