Step 3: Identify Eligible Patients
Description
Identifying eligible patients ideally involves two components: (1) conducting electronic record reviews, and (2) reviewing returned Screening Eligibility Assessment (SEA) forms to identify additional ineligible patients and opt-outs. The latter step is optional, but we recommend it when possible: it helps ensure that the central entity targets only patients who are truly eligible, rather than ineligible patients who might become frustrated by receiving screening materials.
Reviews of electronic records (claims, billing, and medical records) are conducted at several points during the intervention. An initial review is conducted to identify patients of participating practices who appear to be eligible to receive intervention material. Eligibility criteria are as follows (a sketch of how these criteria might be applied appears after the list):
- Being ages 50 through 79.
- Being a current patient of the practice (having had at least one visit to the practice within the previous 2 years).
- Being of average risk for colorectal cancer (no previous diagnosis of CRC, colorectal polyps, or inflammatory bowel disease; and no family history of CRC diagnosed before age 60).
- Having a complete mailing address on file.
- Not being up to date in CRC screening according to guidelines (not having had a colonoscopy within the previous 10 years, a flexible sigmoidoscopy or double contrast barium enema within the previous 5 years, or a fecal occult blood test (FOBT), FIT, or similar stool test within the previous year).
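A minimal sketch, in Python, of how these criteria might be expressed as a filter over extracted patient records is shown below. The record field names (for example, age, last_visit_date, last_colonoscopy) are illustrative assumptions rather than fields from any particular source system, and real extracts will contain missing or unreliable values.

```python
from datetime import date, timedelta

TODAY = date.today()

def years_ago(n: int) -> date:
    """Approximate date n years before today (adequate for screening lookback windows)."""
    return TODAY - timedelta(days=365 * n)

def is_eligible(p: dict) -> bool:
    """Apply the intervention eligibility criteria to one extracted patient record.

    The keys of p are illustrative assumptions; real source systems differ,
    and several of these values may be missing or unreliable.
    """
    # Ages 50 through 79
    if not 50 <= p["age"] <= 79:
        return False
    # Current patient: at least one visit within the previous 2 years
    if p["last_visit_date"] is None or p["last_visit_date"] < years_ago(2):
        return False
    # Average risk: no prior CRC, colorectal polyps, or inflammatory bowel disease,
    # and no family history of CRC diagnosed before age 60
    if p["history_crc"] or p["history_polyps"] or p["history_ibd"]:
        return False
    if p["family_crc_diagnosed_before_60"]:
        return False
    # Complete mailing address on file
    if not p["mailing_address_complete"]:
        return False
    # Not up to date with screening per guidelines
    if p["last_colonoscopy"] and p["last_colonoscopy"] >= years_ago(10):
        return False
    if p["last_sigmoidoscopy_or_dcbe"] and p["last_sigmoidoscopy_or_dcbe"] >= years_ago(5):
        return False
    if p["last_stool_test"] and p["last_stool_test"] >= years_ago(1):
        return False
    return True
```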
Patients deemed ineligible after the initial record review are excluded from receiving the intervention and do not receive any of the patient mailings. At this time, an SEA form could be mailed to patients to further clean the data of ineligibles and patients who do not want to receive any more screening information. Patients deemed eligible after this initial review are entered into a master patient database used to track:
- Patient response to the intervention.
- Screening results for those subsequently choosing to be screened.
- Patient notification of screening results.
- Followup to screenings.
The master patient database can also store eligibility information from the SEA, demographic information about each patient gleaned from medical records and SEA responses (described in greater detail below under mailing intervention material to patients), and information about the primary care practice with which each patient is affiliated.
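One way to picture such a database is as a set of per-patient records like the illustrative sketch below. The field names are assumptions derived from the tracking needs just described, not the layout of the database actually used in the study.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class MasterPatientRecord:
    """Illustrative layout for one row of the master patient database (field names assumed)."""
    patient_id: str
    practice_id: str                       # primary care practice affiliation
    eligibility_status: str = "eligible"   # "eligible", "ineligible", or "opt_out"
    ineligibility_reason: Optional[str] = None
    # Response to the intervention
    sea_returned: bool = False
    kit_requested: bool = False
    # Screening results and followup for those subsequently choosing to be screened
    screening_modality: Optional[str] = None    # e.g., "stool_test", "colonoscopy"
    screening_date: Optional[date] = None
    screening_result: Optional[str] = None      # e.g., "negative", "positive"
    patient_notified: bool = False              # notification of screening results
    cde_completed: bool = False                 # followup complete diagnostic evaluation
    # Demographics gleaned from medical records and SEA responses
    demographics: dict = field(default_factory=dict)
```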
Followup electronic reviews are conducted prior to various subsequent steps in the intervention that require identifying patient response to the intervention material or results of screenings. We describe these in more detail below under tracking patient response and screening results.
Our Experience
We identified patients in intervention and control practices as being potentially eligible for average-risk CRC screening (and hence eligible for the intervention) through an initial electronic record review using the eligibility criteria described above, expanded to also exclude patients insured through one of the Blue Cross Blue Shield products conducting a CRC screening program of their own. Patients deemed eligible by record review were then entered into a master patient database and tracked through the various stages of the intervention screening process, including tracking responses to the SEA mailing.
As previously described, we collected the required electronic data for the record review from LVPHO, MATLV, LVPG, and LVHN. The electronic data systems included:
- Claims submitted for payment to LVPHO by providers at participating practices for health care provided to patients insured through an LVPHO insurance product.
- Bills for health care provided at participating practices to patients insured through any private insurance product (LVPHO or non-LVPHO), any public program (Medicare or Medicaid), or self-insured (self-pay).
- Patient electronic medical records at practices with EMR systems accessible to project staff.
Acting in a HIPAA-compliant manner,vi LVHN study personnel merged each entity's data to develop a central database for this study. This database contained information on all participating practice patients identified as potentially eligible for the intervention.
Prior to the reminder mailing (mailing 3) and at several other points during the intervention period, we conducted electronic record reviews to assess evidence of CRC screening. We also conducted several additional reviews in conjunction with the intervention assessment to identify patients who were screened or had a followup test performed.
Lessons Learned
During both the pilot and full intervention, we identified several key factors that affect the ability to conduct electronic record reviews successfully. These factors include the following:
- We received OMB clearance close to the end of calendar year 2008, which affected the timing of our intervention. We had to delay data extraction until the source data systems could update and stabilize after the end-of-year open enrollment period, which brings changes in patients' insurance selections and provider/practice changes resulting from plan switching. We recommend against scheduling record reviews during, or in the weeks immediately following, open enrollment periods: records are not yet up to date, and data systems and personnel have competing priorities.
- Staff responsible for managing and extracting data from the source data systems did not always have the experience or expertise needed to easily produce the lists of patients eligible for the intervention. In addition, these systems were not always set up to produce such lists or to provide the kinds of data we needed (e.g., identifying which primary care practice a patient was affiliated with or finding evidence of prior CRC screening in the records) and required special programming. We needed to learn how to (1) accommodate competing demands on electronic data systems and data management and programming staff, (2) supplement existing report generation programs with revised programs that met the requirements of the data extraction, (3) respond to HIPAA concerns of data management staff who had not been part of early decisionmaking and HIPAA reviews, and (4) accommodate and overcome missing data and data ambiguities, especially in electronic medical records. We did not experience these problems to this extent with the pilot site. These lessons were not unique to this study: other recent studies (Roth, et al., 2009; West, et al., 2009) note that obtaining information from electronic health records is complicated and difficult. Specific to colorectal cancer, Roth, et al. (2009) found that quality indicator data are especially problematic.
- Even after we received source data for the record review, data cleaning required a significant amount of additional manual effort (i.e., more time and internal resources than originally anticipated). For example, the format of patient names received from some of the source systems was not compatible with producing mailing labels, requiring us to manually reformat the names. All of these issues further delayed the start of our intervention. In response to these unanticipated delays, we decided to stagger the start of the intervention. We sent an initial wave of mailings to patients of practices for which data cleaning and formatting issues were resolved and waited to send a second wave of mailings to patients of the remaining practices for which data problems remained. We describe this situation in greater detail in the following section.
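As an illustration of the name-formatting cleanup mentioned above, the sketch below converts a "LAST, FIRST MIDDLE"-style export into a mailing label form. The input format and function name are assumptions for illustration; in practice, suffixes, hyphenated names, and multi-part surnames still required manual review.

```python
def label_name(raw_name: str) -> str:
    """Convert an exported 'LAST, FIRST MIDDLE' name into 'First Middle Last' for a label.

    Only the simple case is handled; suffixes, hyphenated names, and multi-part
    surnames still required manual review in practice.
    """
    if "," in raw_name:
        last, _, rest = raw_name.partition(",")
        parts = rest.split() + [last.strip()]
    else:
        parts = raw_name.split()
    return " ".join(p.capitalize() for p in parts)

# Example: label_name("SMITH, JOHN A") returns "John A Smith"
```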
Step 4: Mail Screening Information and Materials
Description
This step consists of several mailings to patients:
- Screening Eligibility Assessment (SEA) brief survey form.
- Invitation to be screened.
- Followup reminder.
- Optional second reminder.
If the intervention is to be implemented in a community with a significant population that speaks a language other than English, we highly recommend sending all mailed material in both English and that language.
All patients in the intervention practices who are identified as being potentially eligible by the initial electronic record review are sent a first mailing consisting of a letter from the primary care practice with which they are affiliated. This mailing explains the importance of CRC screening, informs them of the practice's participation in a screening program, and requests that they complete the enclosed form (the SEA form). The SEA form asks patients to indicate whether they consider themselves to be ineligible for the intervention (i.e., self-identify as ineligible by checking one or more listed reasons that would make them ineligible). It also asks them to provide additional demographic information about themselves not otherwise ascertainable through the available electronic records. Such information includes race/ethnicity, preferred language, marital status, educational level, and perceived health status.
The SEA form also provides a check box that patients can use to indicate that they do not want to participate in the intervention (i.e., that they do not want to receive subsequent information about CRC screening or this screening program). The form also lists a telephone number, answered by the central entity, that patients can call to opt out of the screening program if they prefer not to respond via the SEA form.
As previously noted, use of the SEA form is optional, although recommended. Its primary intent is to identify patients who appear to be eligible based on the initial record review but who are, in fact, ineligible. This step helps compensate for the current state of most electronic records, in which the information required to determine eligibility is often outdated, incomplete, inaccurate, or missing.
The SEA form gives patients an opportunity to self-identify as ineligible and to report the reason for their ineligibility. The central entity conducting the intervention can then change the status of such a patient in the master patient database from eligible to ineligible and record the reason. The central entity can also inform the practice with which such a patient is affiliated of the need to update its records on this patient or to further confirm with the patient the validity of the self-identified ineligibility. Practices can also use this information to initiate conversations with patients who identify themselves as ineligible because of above-average risk, encouraging them to be more diligent about CRC testing than is recommended for average-risk patients.
If the central entity is willing to accept the results of the initial electronic record review as determinative, it can omit the SEA form. The entity may have great confidence in the validity of the electronic records from which it determines eligibility, or it may decide that the cost and effort of mailing and processing the SEA form, which can be time intensive, outweigh the benefit gained. We recommend against omitting the form but leave the decision to the central entity adopting the intervention.
A secondary intent of the SEA form is to collect demographic data not otherwise available. The central entity can use this information to assess screening response rate and results by demographic group and plan subsequent educational or outreach programs targeted specifically to appropriate groups. Again, if the central entity does not want to include this function in the intervention, it can omit the step.
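To make this bookkeeping concrete, the sketch below shows how a returned, coded SEA form might update a record in the master patient database, reusing the illustrative record layout sketched earlier. The response fields (opt_out, ineligible_reason, demographics) are assumptions about how a coded form could be represented, not the actual coding scheme used in the study.

```python
def apply_sea_response(record: MasterPatientRecord, sea: dict) -> None:
    """Update one master database record from a coded SEA form response (illustrative)."""
    record.sea_returned = True
    if sea.get("opt_out"):
        # Patient does not want to receive further screening information.
        record.eligibility_status = "opt_out"
        return
    reason = sea.get("ineligible_reason")
    if reason:
        # Patient self-identified as ineligible; record the reason so the affiliated
        # practice can update its records or confirm the ineligibility with the patient.
        record.eligibility_status = "ineligible"
        record.ineligibility_reason = reason
        return
    # Patient remains eligible; store any demographic information provided on the form.
    record.demographics.update(sea.get("demographics", {}))
```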
All patients deemed to still be eligible following the SEA mailing are sent an invitation to be screened. If the central entity omits the SEA mailing, the invitation becomes the first rather than the second mailing.
The invitation mailing consists of:
- A letter from the patient's primary care practice inviting the patient to be screened.
- Educational material regarding CRC and screening (a brochure that describes the benefits of CRC screening and the alternative screening modalities consistent with the 2008 Multi-Society and USPSTF guidelines).
- A list of colonoscopy providers to whom practice clinicians refer.
- Either a stool test kit or a request card that the patient can mail back to request a kit.
- A self-addressed stamped envelope for returning either the kit or the card.
If the invitation mailing is the first mailing (i.e., the SEA mailing was omitted), the invitation letter also provides the introduction to the screening program from the initial letter accompanying the SEA form. The invitation letter is also tailored to whether a stool test kit or a request for a kit is enclosed.
If the central entity conducting the intervention is using stool test kit request cards for all or some of the patients in the screening program, it periodically mails kits to patients requesting them. It also updates its master patient database to indicate which patients have requested a kit.
In conjunction with tracking patient response to the screening invitation (described below), the central entity sends one or more reminders to patients in the database who remain eligible for screening (i.e., those for whom no disqualifying information has been received and no evidence of screening is found). The central entity can decide to stop after only one reminder or to send one or more subsequent reminders. This decision will be based on considerations of cost, expected increased response, and patient reaction.
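A minimal sketch of the reminder selection step, again using the illustrative record layout sketched earlier: patients still marked eligible, with no disqualifying information received and no evidence of screening found, are pulled into the reminder mailing list.

```python
def reminder_recipients(records: list[MasterPatientRecord]) -> list[MasterPatientRecord]:
    """Select patients who remain eligible for a reminder mailing (illustrative)."""
    return [
        r for r in records
        if r.eligibility_status == "eligible"  # no opt-out or ineligibility received
        and r.screening_date is None           # no evidence of screening found in any source
    ]
```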
Our Experience
All material sent to patients was bilingual, in English and Spanish. Approximately 11 percent of the population of the Lehigh Valley is Hispanic or Latino, and many speak Spanish as their primary language. We felt it was important to provide written material in the language with which these patients are most comfortable. Further, recent studies have found that minorities are more likely to be screened when the information is presented in a manner they can read and understand (Carcaise-Edinboro, 2008; Natale-Pereira, et al., 2008; Nguyen, et al., 2010).
Mailing 1: Screening Eligibility Assessment
We sent an introductory mailing to all patients in the intervention practices who were identified as being potentially eligible by the electronic record review. This mailing contained a letter from their primary care practice regarding the importance of CRC screening and an SEA form. This form asked them to verify their eligibility and to provide additional demographic information about themselves not otherwise ascertainable through the available electronic records.
We did not send this mailing to patients in the control practices. Receiving information and recommendations for CRC screening from a health care provider has been found to be a predictor of screening. Thus, we considered this mailing to be a component of the intervention and did not want to expose members of the control group to it lest it stimulate a portion of them to be screened and thus confound the intervention assessment.
The SEA form provided an opportunity for patients to self-identify as ineligible and indicate a reason for their ineligibility. We coded such patients as ineligible in our master patient database and included their reason. The SEA form also provided a mechanism for patients to inform us that they did not want to receive any further information about CRC screening, which we considered to be an indication of their desire to opt out of the intervention. We coded them as opt-outs in the master patient database. We did not require patients to opt in because this is a population-based public health intervention designed to reach out to the eligible unscreened population. We included in the intervention all patients who did not return an SEA form, as we had no indication from them that they were either ineligible or wanted to opt out.
We initially planned to postpone the first intervention mailing until the completion of the initial electronic record review for all participating practices even though we were experiencing delays in obtaining and cleaning source data. We were able to resolve these issues for 8 of the 15 intervention practices by the end of April but continued to experience problems with the remaining 7 practices. At that point, we decided to conduct the mailings in two successive waves. Wave 1 consisted of the eight practices for which the electronic record review was complete; Wave 2—mailed several weeks latervii—consisted of the remaining seven practices.
Mailing 2: Invitation To Be Screened
We sent the invitation to be screened to all patients who did not opt out or indicate through the SEA form that they were ineligible for the intervention, and whose first mailing was not returned as undeliverable. As noted, due to financial constraints on the clinical laboratory supplying stool test kits for this study and the lower than expected return-for-processing rate for kits mailed in the intervention pilot, we modified our protocol for this mailing. Instead of mailing kits to all recipients as we did for the pilot, we required recipients to return a card, enclosed with the mailing, to request a kit.
We wanted to test the effect of the change in protocol and thus devised a small substudy embedded within the main study. This substudy allowed us to estimate the relative effectiveness of two different methods of providing stool test kits to patients (enclosed with the invitation vs. sent in response to a request). We sought and received agreement from the lab to supply up to 550 kits for direct mailing to a subset of patients. We selected the two largest practices in Wave 2 for this substudy (N = 2,036 and 373 eligible patients, respectively). We separately randomly selected a proportional subset of each practice's patients to be sent the stool kit rather than the card (totaling 500 patients across the two practices). By restricting this substudy to only two practices and then separately randomizing patients, we were able to estimate the effect of the two versions of the protocol controlling for the effect of practice setting.
We used two versions of the letter from the practices for this substudy. The letter for those patients receiving the request card protocol explained the procedure to follow for mailing back the card to request a kit. The letter for those patients receiving the kit directly explained the procedure for using and returning the kit to their physician's practice. To avoid confusion among patients, we sent all patients of these practices who shared a common address (i.e., patients within the same household) the same version of the invitation mailing. Thus, if one eligible patient member of a household was randomly selected to receive the stool test kit directly, then all eligible patient members of that household received that version of the mailing. This procedure increased the number of patients receiving the kit directly to 540. Our assessment of the intervention now includes a comparison of screening rates for the two different protocols for distributing stool test kits within the two selected intervention practices.
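The household-consistent assignment can be sketched as follows, under the assumption that household membership is approximated by a shared, normalized mailing address. In the study, the random selection was drawn proportionally within each of the two practices; a single draw is shown here for brevity, and the field names are illustrative.

```python
import random
from collections import defaultdict

def assign_kit_arm(patients: list[dict], kit_target: int, seed: int = 0) -> set:
    """Randomly assign roughly kit_target patients to the direct-kit arm, then extend
    the assignment to all eligible members of their households (illustrative).

    Each patient dict is assumed to have 'patient_id' and 'household' keys, where
    'household' is a normalized mailing address; returns the patient_ids to be sent
    the stool test kit directly (everyone else receives the request card).
    """
    rng = random.Random(seed)
    selected = {p["patient_id"] for p in rng.sample(patients, kit_target)}

    # Group patients by household so that housemates receive the same mailing version.
    households = defaultdict(list)
    for p in patients:
        households[p["household"]].append(p["patient_id"])

    kit_arm = set(selected)
    for members in households.values():
        if any(m in selected for m in members):
            kit_arm.update(members)   # the whole household gets the direct-kit version
    return kit_arm
```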
We sent this second mailing in two waves to match the waves of the first mailing in order to give all recipients adequate time to complete and return the SEA form before we prepared mailing lists for the second mailing. Wave 1, completed during mid-July, consisted of patients sent a stool test kit request card. We split Wave 2 between those receiving the stool test kit request card (Wave 2a, sent in early August) and those receiving the stool test kit directly (Wave 2b, sent at the end of July).
Mailing 3: First Reminder Mailing
After we conducted an electronic record review to assess evidence of screening, study personnel sent a reminder letter, on behalf of each patient's practice, to patients with no evidence of screening. Our goal was to further stimulate and encourage patient screening (Hudson, et al., 2007). Mailing 3 was completed at the end of August.
Mailing 4: Second Reminder Mailing
We considered sending a second reminder mailing (a fourth mailing) to further encourage and increase CRC screening; however, we decided not to do so in response to requests from several practices that had received complaints from patients about study contact. The practices informed us that patients complained about receiving mailings for screenings that they either did not want or did not need. Apparently, the data in the electronic records were more incomplete or faulty than we had anticipated and did not adequately identify ineligible patients.
Our protocol specified that we would send intervention material to all patients not opting out who otherwise appeared to be eligible; thus, we inadvertently sent material to some ineligible patients who did not return their SEA forms notifying us of their ineligibility. Similarly, although we provided an opportunity and a means for patients to opt out, many who did not want to participate apparently did not inform us of their desire to opt out. Given that we did not want to alienate either the practices or their patients, we decided to cancel the second reminder.
We believe this issue could have been minimized had the electronic records contained more accurate information. A recent study by Schneider, et al. (2008) found that administrative data often underestimate receipt of CRC screening.
Lessons Learned
We learned several key lessons from the patient mailings, including the following:
- The patient mailings were time and labor intensive. For the full intervention, we anticipated this and extended mailing timelines and assigned additional staff to accommodate the requirements for printing, assembling, and mailing all the materials. Even so, we experienced delays in sending out the mailing materials. Part of this problem stemmed from the time required to ensure that each letter uniquely identified a specific patient and was matched to the correct mailing label. Depending on the size of the patient population and the staff resources available to the central entity, we recommend using a separate contractor who specializes in large-scale mailings to send and track the patient mailings. HIPAA issues should be considered when making this decision.
- Patients do not always return requested information (e.g., SEA form) in a timely manner. For the full intervention, we revised the patient mailing letters so that they more explicitly requested that patients complete and return the SEA form within a specific time period. We also allowed more time for receiving completed forms.
- The SEA form was very useful for identifying patients who were not eligible and who did not want to participate. Some patients used the SEA form to "vocalize" their desire to not continue receiving any additional screening information. We believe that if they had not had this opt-out modality, then the practices would have received more phone calls from frustrated patients.
- Patients can complete an SEA form indicating that they are ineligible or want to opt out and then choose to get screened. It is important to decide whether to treat such patients as eligible and whether to count their screens as "successes," as this affects your tracking records and methods.
Step 5: Track Patient Screening and Results
Description
After a reasonable period of time and periodically thereafter, the central entity conducts a followup electronic record review to look for evidence of screening (in particular, a stool test or colonoscopy), reviews reports received from the clinical lab processing stool test kits for evidence of screening and results, and updates its master patient database accordingly. Results from this tracking are used for sending reminders and preparing feedback reports to the practices.
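As a sketch of this periodic update step, the following merges stool test results reported by the clinical lab into the master patient database, reusing the illustrative record layout sketched earlier. The report fields (patient identifier, processing date, positive/negative result) are assumptions about a simplified lab feed, not the actual report format used.

```python
def merge_lab_reports(records_by_id: dict, lab_reports: list[dict]) -> None:
    """Fold stool test results reported by the clinical lab into the master patient
    database (illustrative).

    records_by_id maps patient_id to MasterPatientRecord; each report is assumed to
    carry a patient identifier, a processing date, and a 'positive'/'negative' result.
    """
    for report in lab_reports:
        record = records_by_id.get(report["patient_id"])
        if record is None:
            continue  # result for a patient not in the intervention database
        record.screening_modality = "stool_test"
        record.screening_date = report["processed_date"]
        record.screening_result = report["result"]
```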
The intervention also uses two other tracking mechanisms. First, practices track the screening of their own patients. They can track screening either through the screening tracking spreadsheet provided to them at the academic detailing sessions or through internal tracking mechanisms they already have in place. They then periodically generate reports that the central entity can use to update the master patient database. Second, especially for practices without electronic medical records, central entity staff may request access to select patient charts in order to conduct audits looking for evidence of screening and possible needed followup. Such chart audits would only be performed when electronic evidence of screening and followup is inconclusive and only if they would not violate HIPAA requirements.
Our Experience
Using the electronic data systems, we tracked evidence of screening and followup at several intervals during the study. This allowed us to monitor both the number of patients being screened and the methods by which they were being screened. We updated the master patient database with tracking information as we received it.
We tried to collect screening tracking spreadsheets and internal tracking mechanisms from each practice; however, we found that the practices did not use the screening tracking sheet and they were not able to share their own internal tracking mechanisms. Therefore, we were not able to use this information for tracking patients.
We did conduct chart audits on a sample of charts from the intervention and control practices. The protocol we used for the chart audits follows:
- For intervention practices:
- In practices with approximately 50 or fewer patients included in the study (where we could only access limited electronic data for identifying intervention-eligible patients), we conducted chart audits for all patients included in the study.
- In the remaining practices, each with substantially more than 50 intervention-eligible patients included in the study, we set a target of auditing a 6 percent sample of study patient charts. To facilitate reaching this target within each of these practices, we drew a 12 percent random sample of their study patient charts. Starting at the top of each randomized list, we conducted audits of available usable charts until we reached a quota of approximately 6 percent for each practice (upwardly rounded to the next whole number; e.g., 6% of 520 = 31.2, upwardly rounded to 32).
- For control practices:
- In practices with approximately 50 or fewer patients included in the study (where we could only access limited electronic data for identifying intervention-eligible patients), we conducted chart audits for all patients included in the study.
- In the remaining practices, each with substantially more than 50 intervention-eligible patients included in the study, we set a target of auditing an 8 percent sample of study patient charts. To facilitate reaching this target within each of these practices, we drew a 16 percent random sample of their study patient charts. Starting at the top of each randomized list, we conducted audits of available usable charts until we reached a quota of approximately 8 percent for each practice (upwardly rounded to the next whole number; e.g., 8% of 540 = 43.2, upwardly rounded to 44).
We used a higher target for control practices than for intervention practices to partly compensate for having less complete data for control practices.
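The sampling rule described above can be expressed as a short sketch, assuming a list of chart identifiers per practice: draw a random oversample at twice the target rate, then work down the randomized list until the upwardly rounded quota of usable charts is met. The chart_is_usable check stands in for the manual determination of whether a chart could actually be audited.

```python
import math
import random

def audit_sample(chart_ids: list, target_rate: float, chart_is_usable, seed: int = 0) -> list:
    """Select charts to audit: draw a random oversample at twice the target rate, then
    take usable charts from the top of the randomized list until the rounded-up quota is met.

    Example: target_rate=0.06 with 520 study patients gives a quota of
    ceil(0.06 * 520) = ceil(31.2) = 32 usable charts.
    """
    rng = random.Random(seed)
    quota = math.ceil(target_rate * len(chart_ids))                  # 6% or 8%, rounded up
    oversample_n = min(len(chart_ids), math.ceil(2 * target_rate * len(chart_ids)))
    randomized = rng.sample(chart_ids, oversample_n)                 # 12% or 16% random sample

    audited = []
    for chart_id in randomized:                                      # start at the top of the list
        if chart_is_usable(chart_id):
            audited.append(chart_id)
        if len(audited) >= quota:
            break
    return audited
```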
Lessons Learned
We learned several valuable lessons regarding patient tracking, including the following:
- From the pilot, we learned that in addition to updating the master patient database with information related to eligibility for the intervention and each of the various mailings, patient response, and screening results and followup, it would be useful to track patients through the flow of intervention steps. In addition, information to support the tracking of patients was not always available through the source data systems we were using. For the full intervention, we developed an intervention flowchart for internal tracking purposes that accounted for each patient being passed from one intervention step to the next. We also developed a Screening Tracking Sheet for use by practices without an existing internal tracking system. However, we experienced difficulties using these tools. The intervention flowchart was not always compatible with the electronic data systems, and therefore with the information available in our master patient database, so it was difficult to populate. As noted, we also found that practices did not use the Screening Tracking Sheet, which we had sent to the office manager of each practice. We feel it may also be helpful to have a clinical "champion" at each practice to help ensure that clinical tracking tools are used.
- During the pilot, we learned that the lab did not have an established process to review its electronic records for evidence of screening. For the full intervention, we requested that the lab establish such a process. The lab used this process for the full intervention without any problems.
- We learned that it was time and labor intensive to code and capture SEA results for the master patient database. For the full intervention, we developed a codebook and coding instructions for the SEA data and created an electronic database for them that could be subsequently merged into the master patient database. This data coding and tracking was still time consuming. In the future, we recommend exploring the use of a scannable SEA form. This will help minimize manual data entry and increase the speed, and perhaps accuracy, of the data entry.
- We also learned that it can be very difficult to determine from the electronic records whether a complete diagnostic evaluation (CDE) was performed on patients with positive stool tests. In fact, we had to manually review the charts of nearly all patients with positive stool tests to uncover evidence of CDE. Colonoscopy can be performed as either a screening test or as diagnostic followup for an abnormal finding from a stool test or other screening procedure. But in many cases, a screening colonoscopy and a diagnostic colonoscopy were not easily distinguishable in the electronic record. This ambiguity required a manual review of the record and chart notes to resolve. When even a manual review of the patient record could not provide unambiguous resolution, we assumed that if a patient had both a stool test and a colonoscopy during the intervention observation period, the colonoscopy would be a CDE rather than a screen.
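The fallback rule described above can be written as a short sketch, assuming per-patient test dates extracted from the records; the field names are illustrative, and the rule applies only after manual chart review has failed to resolve the ambiguity.

```python
def classify_colonoscopy(tests: dict) -> str:
    """Classify a colonoscopy found during the observation period (illustrative).

    Fallback rule applied only when manual chart review could not resolve the
    ambiguity: if the patient also had a stool test during the observation period,
    treat the colonoscopy as a complete diagnostic evaluation (CDE) rather than a screen.
    """
    if tests.get("colonoscopy_date") is None:
        return "no_colonoscopy"
    if tests.get("stool_test_date") is not None:
        return "cde"        # assumed to be diagnostic followup to the stool test
    return "screening"      # no stool test in the period; count as a screening colonoscopy
```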
Step 6: Provide Feedback to Practices
Description
The central entity notifies participating practices of negative (normal) and positive (abnormal) stool test results and coaches them on notifying patients of screening results and on how to follow up on them. The central entity also provides practices with a form for tracking and documenting followup CDEs for patients with positive stool test results.
Practices are expected to respond to a negative stool test by notifying the patient and informing the patient that the guidelines recommend rescreening every 12 months. Practices are expected to respond to a positive stool test by recommending a CDE for the patient. The CDE feedback form identifies patients in need of a CDE and reminds providers of recommended CDE procedures. It also requests that providers document (1) the advice given to the patient to have a CDE and the modality recommended, (2) the date the CDE was scheduled and completed, and (3) the results of the CDE, along with any additional comments. This information can then become part of the patient's record.
Our Experience
We sent the appropriate feedback forms to the practices at several interim points during the assessment period of the study, on an ongoing basis as new screening results were identified:
- We sent the stool test positive form to each practice that had patients with a positive stool test result. The form reminded the practice that these patients should have a followup test and what types of tests were recommended.
- We sent the stool test negative form to each practice that had patients with a negative stool test result. The form reminded the practice that these patients should be notified of their negative result and that they should be screened again in 1 year.
- We sent the CDE feedback form (along with the stool test positive form) to practices with patients who had a positive stool test result. This form was a tracking tool on which clinicians could record the appropriate followup steps. The CDE feedback form asked the clinician to return the completed form to the study; however, we did not receive any completed forms. We contacted practice managers at several points about collecting the CDE feedback forms, without success.
Lessons Learned
It was difficult to get clinicians to complete and return the CDE feedback form. As noted, we recommend that the central entity have a clinical "champion" at each practice, in addition to the office manager. If each practice has a clinical point of contact, he or she may be better suited to encourage fellow clinicians to use these clinical tracking tools.
vi Medical practices must comply with the provisions of the Health Insurance Portability and Accountability Act regarding patient records.
vii The Wave 2 mailing was further delayed in the LVHN mailroom, where it was temporarily misplaced before being placed into the Postal Service mail.