Using Assessment Data to Effect Curricular Change and Increase Licensure Exam Scores

Creating a learning environment that is most conducive to producing positive student outcomes is the core mission of all educators. Regardless of the level of education in which we work, we use a combination of similar components to create these learning environments—content, instructional methods, and assessments. While improving student performance is the goal, it is not always the motivation for what and how we teach. For example, the curricula that institutions implement are often products of tradition or are dictated by the need to satisfy accreditation requirements. Furthermore, instructors tend to use instructional methods and assessment techniques learned during their time as students, from a mentor, or through trial and error in the classroom. Unfortunately, a question remains unanswered at many institutions: how effective are these curricula and instructional methods at actually improving student outcomes?

While educators can measure student outcomes by performance on their course exams, these evaluations are generally confined to the individual classroom. A true review of course and curricular effectiveness requires a larger-scale evaluation of student success beyond the course level. Throughout all levels of the educational process, educators are tasked with appropriately preparing students for the next step in their academic or professional careers. When students succeed at that next level, it is validation that their courses have done their job. For example, at Oklahoma State University College of Osteopathic Medicine, we use the national level one board exam as the true measure of the effectiveness of the first two years of our program's curriculum. Students' success on this high-stakes exam provides a clear indication that courses are achieving the goal of positively impacting student outcomes.

There is much to learn about our curricula beyond the simple correlation between a high board passage rate and a successful program. Through the use of computer-based assessment programs such as ExamSoft, the very assessments we use to determine whether students have learned the required content in our courses can also be used to evaluate our curriculum. The data that institutions receive on internal assessments can be compared with results on licensing exams (as well as other standardized exams) to create a process that drives positive curricular change.

CREATING A PROCESS

As with any program that is designed to evaluate and improve curricula, it is vital to first establish a sound process prior to implementation. This research process begins with (1) identifying objectives for the study, (2) defining roles for the participants in the research, (3) identifying which data will be collected, (4) creating a process for data review, and (5) implementing curricular changes.

Identifying Objectives. For this curriculum improvement process, the objective of the study is to identify which areas of the curriculum are most effective at preparing students for their level one board exams. Although this objective is focused on one primary outcome, several components make up the process. Along with identifying the most successful areas of the curriculum, the areas that need the most improvement will inevitably be discovered as well. This creates the opportunity to maximize the effectiveness of the curriculum by continuing practices that are proving to lead to student success while altering areas that are not producing satisfactory student outcomes.

Defining Participant Roles and Responsibilities. Once the objectives have been identified, participant roles and responsibilities should be determined. Not only must the primary researchers and data collectors be identified, but the individuals who will enter the questions into the computer-based assessment program must also be selected. Consistency is of the utmost importance when creating exam items and assessments, as the entire data-gathering process depends on using and understanding a common language in the system. To ensure this consistency, it is highly recommended that only a select few individuals enter exam items into the system. At Oklahoma State University College of Osteopathic Medicine, any content-related categorization is typically faculty driven; however, when items are mapped into larger discipline groups, this process can be completed more easily by staff. Fortunately, support staff tend to know which faculty members teach within each discipline; therefore, as long as questions are titled with the faculty member's name, staff are able to complete this part of the process. This illustrates exactly why role definition for each step of this process is so important before the study begins.

In the interest of consistency, it is best to title questions by the faculty member who created the item and the class session for which the item was written, while placing these items in folders designated for each year, semester, course, and exam. This makes item categorization efficient and accurate, whether it is done proactively or retroactively. Next, a common nomenclature for assessments must be identified. It is key to include as much identifying information as possible in each assessment name so it is easily identifiable when data is collected. At a minimum, the assessment name should include the year, semester, course, exam number, and date given. Without a process in which roles and responsibilities are clearly identified in a manner that suits your institution, the overall goal of positively impacting student performance on board and standardized exams is far less likely to be achieved.
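As a concrete illustration, a naming convention like the one described above can even be generated programmatically so that no one has to remember it. The following minimal Python sketch assumes a scheme of our own design; neither the function names nor the exact format is prescribed by any assessment platform:

```python
from datetime import date

def assessment_name(year: int, semester: str, course: str,
                    exam_number: int, given: date) -> str:
    """Build a consistent assessment name containing the year,
    semester, course, exam number, and date given."""
    return f"{year}_{semester}_{course}_Exam{exam_number}_{given:%Y-%m-%d}"

def item_title(faculty: str, session: str) -> str:
    """Title each exam item by its author and the class session
    for which it was written."""
    return f"{faculty}_{session}"

# Example output: 2018_Fall_CardiovascularSystem_Exam2_2018-10-15
print(assessment_name(2018, "Fall", "CardiovascularSystem", 2, date(2018, 10, 15)))
print(item_title("SmithJ", "CardiacPhysiology_Session03"))
```

However consistent naming is achieved, the point is the same: every item and assessment can later be located and grouped without manual detective work.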

Identifying Data to Be Collected. Knowing the objective, data collection will come from two sources—the board exam results that are shared from the accrediting organization and internal summative exam data from our institution. Regardless of the information that is provided by each accrediting and/or state organization, it is key to map exam items in the examination software in a way that will provide reliable data to create opportunities for meaningful curricular change. For example, some medical school board exam results include a breakdown of how students perform in specific content areas such as physiology, pharmacology, pathology, etc. This dictates that categories for course assessment items are created to match these content areas. When exams are created in your curriculum, regardless of your type of curriculum, each item should then be mapped by these content disciplines. While this can be completed retroactively, it will make the most significant impact if it is done as exam items are created.
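A minimal sketch of this item-to-discipline mapping, assuming a simple in-house record format (the field names here are illustrative, not an export format from any particular assessment platform):

```python
from collections import Counter

# Each exam item is tagged with the discipline categories that match
# the content areas reported on the board exam score breakdown.
exam_items = [
    {"id": "Q001", "author": "SmithJ", "disciplines": ["physiology"]},
    {"id": "Q002", "author": "LeeK",   "disciplines": ["pharmacology"]},
    {"id": "Q003", "author": "SmithJ", "disciplines": ["pathology", "physiology"]},
]

# A quick tally confirms the mapping is in place for every discipline.
counts = Counter(d for item in exam_items for d in item["disciplines"])
print(counts)  # Counter({'physiology': 2, 'pharmacology': 1, 'pathology': 1})
```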

Creating a Data Review Process. It is now time to collect and aggregate the data from summative assessments, which can be done as soon as each assessment is completed. The key is to collect all data that was categorized to match the board results received from the accrediting body, so select the categories that return results from every assessment that fits the study. After collecting the data, compare it to student results on the board exams. The first step of this comparison is to determine in which areas of the curriculum students are most and least successful on the board/standardized exam. This identifies which areas of the curriculum and assessments to study more closely for improvement and which areas to emulate.
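For institutions that can export category-level results for analysis, the comparison step might look like the following sketch. It assumes two hypothetical CSV exports (the file and column names are ours, not any vendor's format): per-student percent-correct by category on internal exams, and class-level category means from the board score report:

```python
import pandas as pd

# Hypothetical exports; file names and columns are illustrative only.
internal = pd.read_csv("internal_category_scores.csv")  # student_id, category, pct_correct
board = pd.read_csv("board_category_results.csv")       # category, class_mean_pct

# Average internal performance within each board-aligned category.
internal_means = (
    internal.groupby("category")["pct_correct"]
            .mean()
            .rename("internal_mean_pct")
)

# Side-by-side view: where are students strongest and weakest?
comparison = board.set_index("category").join(internal_means)
comparison["gap"] = comparison["internal_mean_pct"] - comparison["class_mean_pct"]
print(comparison.sort_values("gap"))
```

Sorting by the gap between internal and board performance surfaces the disciplines where course assessments and board outcomes diverge most, which is exactly where a review team should look first.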

Implementing Curricular Change. Reviewing category data from internal summative assessments and comparing it to board statistics provides the information needed to begin making tangible changes to a curriculum. Ideally, there will be a correlation between student performance in specific content areas on the boards and on institutional summative assessments. However, even if this is not the case at your institution, there is still enough data to create positive change. It is important to review all aspects of each mapped area to locate what changes need to be made.

No element of this process is too small for review. In fact, one of the changes made at our institution that positively impacted student board performance was as simple as balancing the quantity of exam items from each discipline on each exam. The categorization process revealed that a specific discipline was represented by only three to six exam items on many assessments in our systems-based curriculum. As students came to understand the composition of the exams, they devoted less study time to content that was not well represented on course exams. The lack of questions in this discipline therefore resulted in lower exam scores on those items and, eventually, lower board exam scores in this area. Moving forward, assessments were created to be better balanced, encouraging students to actively study all disciplines throughout the didactic years of our curriculum. This simple change led to increased board scores in the content disciplines previously underrepresented on our exams, without negatively affecting student scores in other areas. A balance check of this kind can even be automated, as sketched below.
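This sketch reuses the illustrative item records from earlier; the minimum-item threshold is an assumption to be set against your own exam blueprint:

```python
from collections import Counter

MIN_ITEMS = 8  # illustrative threshold; set to match your exam blueprint

def flag_underrepresented(items, minimum=MIN_ITEMS):
    """Count items per discipline on one exam and report any
    discipline that falls below the target minimum."""
    counts = Counter(d for item in items for d in item["disciplines"])
    return {disc: n for disc, n in counts.items() if n < minimum}

# Items tagged as in the earlier mapping sketch (abbreviated).
exam_items = [
    {"id": f"Q{i:03}", "disciplines": [disc]}
    for i, disc in enumerate(
        ["physiology"] * 10 + ["pharmacology"] * 9 + ["pathology"] * 4
    )
]

print(flag_underrepresented(exam_items))  # {'pathology': 4}
```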

This example highlights the need for a team of individuals to review the data from this process. Every area of curriculum and instruction needs to be evaluated to make a positive impact, down to something as simple as the number of items of each type used on assessments. Other changes identified as part of this evaluation process included updating the number of course hours for specific disciplines within our systems courses, reducing the hours of certain courses within our curriculum and redistributing them to areas where student performance needed to improve, adjusting content sequence within certain courses, creating a new body systems/biomedical science integration course, and creating a faculty adviser program.

RESULTS

In education, it is of the utmost importance to help students identify areas in which they can improve. As indicated in the table below, after implementing the previously mentioned changes, our institution saw an increase in student performance in all but one of the major board content areas.

Additionally, category scores increased between the classes of 2017 and 2018, the point at which these changes were implemented based on exam data. This included improved average scores both for students who passed and for those who did not pass the board exam overall.

We were highly satisfied with these results: our data analysis indicated that we were able to help all students improve their board performance, including students who struggled on the board exam. Given these positive results, we have continued to collect all exam data by discipline and compare it with student board performance. All curricular and instructional changes previously made have been maintained or built upon, with continuing positive results.

ADDED BENEFITS

This process has proven to be a healthy exercise for our institution, as it has created a thorough program-evaluation process at all levels of instruction. In addition, the statistical evaluation has served as an accidental but highly valuable faculty development method. In reviewing each aspect of teaching and learning, we have been able to better emphasize the importance of each stage of the instructional process. Examples include writing clear learning objectives, drafting assessment items that appropriately evaluate session objectives, and reviewing individual exam statistics as a way of evaluating the instructional methods used in class. Improving classroom instruction and assessment methods is proving to better engage students with course content and to positively impact scores on summative assessments, which, in previous classes, has correlated positively with board performance. This, too, will be an element of our instructional process to evaluate when the next round of board statistics becomes available.

After establishing a method to positively impact student performance on board exams, we must now continually review the process to ensure that it is sustainable. There is also a need for continued program growth. For example, the next step of development should be to evaluate how well our institutional exam results correlate with board exam results. The process has been set up to let us construct all exams to best prepare students for boards while also using our exams to identify which students are most likely to struggle. The intended goal is a process that supports proactive remediation, beginning a remediation study plan before a board failure rather than after one.
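The correlation-and-flagging step described above could start as small as the following sketch. The file, columns, and risk cutoff are all assumptions for illustration; an institution would validate any cutoff against its own historical data before acting on it:

```python
import pandas as pd

# Hypothetical per-student data; file name and columns are illustrative.
scores = pd.read_csv("student_scores.csv")  # student_id, internal_avg, board_score

# How well do internal exam results track board results?
r = scores["internal_avg"].corr(scores["board_score"])  # Pearson r
print(f"Internal-board correlation: r = {r:.2f}")

# Flag students whose internal averages suggest board risk so that
# remediation can begin before, not after, a failure.
RISK_CUTOFF = 70  # illustrative threshold on internal average
at_risk = scores.loc[scores["internal_avg"] < RISK_CUTOFF, "student_id"]
print("Students to contact for proactive support:", list(at_risk))
```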

FINAL THOUGHTS

As our students, technology use, curriculum, and educational methods evolve, we must have a program in place to appropriately evaluate each of these elements to ensure we are continuously improving student outcomes. It is absolutely vital that we always keep this goal in mind to guide our decision-making, as it is our core charge as educators and institutions. If we continuously seek to evaluate program effectiveness through the use of student assessment performance data, our institutions will be well equipped to reach our ultimate goal of preparing students to be successful at the next level of their educational careers.