There is nothing worse than putting a whole lot of effort into collecting PRO data only to find out that the data you have is, well, no good. We’ve identified the five mistakes that occur most often when collecting patient outcomes, and what you can do to fix them.
Here are the five most common mistakes we see in patient-reported outcomes programs.
Keeping only the overall survey score
The mistake: The patient fills out the survey, you score it, you write down and store the overall score, and throw away the survey. Don’t throw away the survey! You need to keep not just the score but the patient’s response to each individual question; otherwise you risk ending up with a large dataset that is completely worthless from a research standpoint. Even if you don’t plan to do research now, there is nothing worse than changing your mind later and realizing you can’t do anything with the data you worked so hard to collect.
The remedy: Each patient outcome survey tool has some sort of scoring system, and most produce multiple subscale scores in addition to the overall score. Usually there are only one or two parts of the score that you really care about and want to see. Here’s what you do: Record the scores that you want to see in some format that works for you and your practice, then take the actual survey and store it somewhere safe. That way, if you ever need the individual answers, you’ll have them available.
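To make the point concrete, here is a minimal sketch of a record that keeps both the raw item responses and a derived score. The field names and the scoring rule are hypothetical placeholders, not taken from any real PRO instrument; the point is only that the per-question answers travel with the record, so any score can be recomputed later.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SurveyRecord:
    patient_id: str
    instrument: str        # e.g. a placeholder name, "ExamplePainScale"
    administered: date
    item_responses: dict   # question id -> raw answer; KEEP these

    def overall_score(self) -> float:
        # Illustrative scoring rule: mean of item values scaled to 0-100,
        # assuming each item is answered on a 0-4 scale.
        values = list(self.item_responses.values())
        return 100.0 * sum(values) / (len(values) * 4)

record = SurveyRecord(
    patient_id="P-001",
    instrument="ExamplePainScale",
    administered=date(2024, 1, 15),
    item_responses={"q1": 3, "q2": 2, "q3": 4},
)
print(record.overall_score())  # prints 75.0
```

Because the raw answers are stored, you can later add subscale scores, audit a suspicious total, or re-score under a revised rule without re-collecting anything.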
Selecting more than one answer
The mistake: A patient fills out the survey on a paper form, and rather than choosing the one answer that best describes their situation, they select three. Then the patient writes a narrative on the side to explain that it is “hard to choose just one because it fluctuates throughout the day and I just took pain medicine 30 minutes ago so now it’s not really bad but oh boy this morning it was just terrible.” Sadly, surveys like this are deemed invalid and must be thrown away.
The remedy: Make it very clear to patients that they must select one answer and one answer only. Often, both a verbal and a written explanation are necessary. Also, include a comments section at the end of the form so that patients have an outlet for their explanations if they so desire. The best possible solution, in fact, is an electronic system that only allows patients to select one answer.
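An electronic form enforces this with a radio-button control, but even a paper workflow can catch the problem at data entry. Here is a small sketch of that check; the function and field names are hypothetical:

```python
def validate_single_answers(responses: dict) -> list:
    """Return the ids of questions with zero or multiple selections.

    `responses` maps a question id to the list of options the patient
    selected; a valid item has exactly one selection.
    """
    invalid = []
    for question_id, selected in responses.items():
        if len(selected) != 1:
            invalid.append(question_id)
    return invalid

# A patient ticked three boxes on q2 and left q3 blank:
responses = {"q1": ["moderate"], "q2": ["mild", "moderate", "severe"], "q3": []}
print(validate_single_answers(responses))  # prints ['q2', 'q3']
```

Flagging the survey while the patient is still in the clinic means it can be corrected on the spot instead of being discarded later.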
Asking too much
The mistake: There are so many things that would be interesting to hear about from your patients, which leads to a tendency to tack a bunch of additional questions onto the actual patient-reported outcome (PRO) tool. Even though the questions may be interesting, it is highly unlikely that they will provide much value, especially from a research standpoint, because your additional questions are not validated.
Also, keep in mind that there is a fine line between how much data we want and how much patients are willing to give. Remember that you need your patients to fill out these surveys at multiple intervals, and you don’t want them to get survey fatigue, which is more likely to happen when the surveys are super long.
The remedy: Don’t let perfect be the enemy of good. If you are considering adding an additional question, think long and hard before doing so. Ask yourself: Is the question I want to add already covered in some way by the PRO tools in the survey? And how will I use the data generated from this question, either for internal process improvement or to better care for my patients?
Asking too little
The mistake: Choosing a PRO tool because it is short. Often, people switch from the PRO tool they originally intended to use for a particular patient population to the shortest tool possible. That’s understandable, especially if you ask patients to fill out the survey during their clinic visit. The KOOS plus the SF-12 combine for 54 questions, whereas the Oxford Knee Score plus the EQ-5D add up to just 17. That’s a big difference! We often see people switch to shorter PRO tools because it is less time-consuming. However, it’s important to first evaluate the impact the switch will have on the quality of the data in the long run.
The remedy: Don’t compromise your goals just to make the data collection process easier or faster. PRO collection should not be something that you simply add on to the rest of the forms patients are already filling out. It needs to be handled separately, as a program that provides a critical part of patient care. Design an outcome collection program that standardizes the data collection process, appropriately factors in the time it will take patients to fill out the surveys, and adjusts clinic flow accordingly.
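A standardized program largely comes down to fixed collection windows. The sketch below shows one way to compute each window’s target date from a baseline date; the interval names and offsets are placeholders for illustration, not a clinical recommendation.

```python
from datetime import date, timedelta

# Hypothetical standardized schedule: each collection window is a fixed
# offset from the baseline (e.g. surgery or enrollment) date.
SCHEDULE = {
    "baseline": timedelta(days=0),
    "6_weeks": timedelta(weeks=6),
    "1_year": timedelta(days=365),
}

def due_dates(baseline: date) -> dict:
    """Compute the target date for every collection window."""
    return {name: baseline + offset for name, offset in SCHEDULE.items()}

print(due_dates(date(2024, 1, 1))["6_weeks"])  # prints 2024-02-12
```

Pinning the windows down in one place like this is what lets the front desk, the clinic schedule, and any later analysis all agree on when each survey was due.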
Not collecting any outcome data at all
The mistake: By far, the biggest mistake that you can make is not collecting anything. Many people intend to collect patient outcomes, but intentions won’t cut it. You either have outcome data, or you don’t. In healthcare today, your value is defined by the relationship between your patient outcomes and the cost of service. If you don’t have outcome data, you can’t quantify your value. Having outcome data is not just a bonus; it is a must.
The remedy: Don’t overthink it! If you are not currently collecting outcome data, start doing so NOW. We often see people in “analysis paralysis” limbo: What PRO tools should we use? When should we administer them? Do we need IRB approval? How are we going to design our research? The reality is, all of these questions can be answered later. You don’t need to have a grandiose plan for how you are going to use the data when you start. In fact, it is often easier to answer these questions when you actually have some data to work with.
What you can’t go back and do retrospectively, however, is collect PRO data. It’s time sensitive. Once the window has passed, it’s gone forever. Think of it like auto insurance. There are a million policies out there, and you can spend months looking into each one to find out which is best for you. But if you get pulled over or end up in a wreck and you have nothing, all the research you did and your best intentions to get a good policy are not going to help. You either have it, or you don’t. There is no good excuse not to be collecting outcome data.