Presented by Dr. Frederick Burrack, Director of the Office of Assessment, Kansas State University
Institutions of higher education are increasingly using documentation of students' application of learning as the key indicator of whether they are achieving their missions to educate students, in contrast to relying on course grades or documentation of content dissemination as identified on syllabi. Developing an institutional culture that values student learning assessment as an essential element of what we do as educators is a foundation for the success of any institution. This webinar focuses on a few ideas on how to establish a culture that embraces student learning assessment, emphasizes program improvement in response to student learning assessment results, and views the assessment process as an essential component of education.
[00:02:15] Well, thank you for joining today. Institutions of higher education increasingly use some form of documentation of student learning as a key indicator of whether they're achieving their educational mission; this is in addition to documenting content dissemination. Developing an institutional culture that values student learning assessment is the focus of today's webinar. I suppose many of you listening are responsible for developing such a culture at your institutions, a culture of assessment. What I'm referring to is an environment where assessment of student learning is valued as a mechanism to define academic success, where it is recognized that course grades include factors only sometimes associated with learning, and that course content and skills taught do not automatically mean course content and skills learned. So this webinar will focus on a few ideas on how to establish a culture that embraces student learning assessment, emphasizes program improvement in response to student learning results, and views the assessment process as an essential component of education. In developing a university-wide culture of assessment, I feel a foundational principle that propels the paradigm is recognizing the uniqueness of the student learning that occurs across academic programs. Although there are learning goals that define educated graduates, such as effective communication skills, application of knowledge to critical thinking and problem solving, multicultural literacy, numerical literacy, aesthetic sensitivity, and so on, the ways student learning and skill development are demonstrated by students in various disciplines will look different in both process and outcome.
[00:04:09] When documenting student achievement of university-wide learning goals, student demonstrations of learning that are authentic to how learning is uniquely applied are essential. There's no such thing as a one-size-fits-all assessment system, and trying to create one will inhibit a culture of assessment. When documenting student achievement of university-wide goals, it's important to authentically represent how learning is uniquely applied; if not, you're going to find the faculty very uncomfortable trying to comply with assessment processes that don't fit the learning that occurs in their discipline. A university-wide assessment system must value the uniqueness of each program's curriculum and mission. Now, if you don't mind, I'd like to use my university as an example. Kansas State has identified five undergraduate learning outcomes; one is communication. Within communication there are several forms, such as oral, written, gestural, and graphic, and student demonstration of achievement in communication skills is, and should be, applied differently from one discipline to another. To describe this notion of disciplinary autonomy in student learning assessment, I offer a few examples. Effective written communication for an elementary teacher requires very different skills and demonstrations of achievement than scientific writing for a researcher. Even within the expectations of an elementary teacher there are multiple competencies: the skills to effectively communicate in writing to students are very different than those used to communicate with an administrator, and both of these are different than a demonstration of achievement for students in a creative writing program.
[00:06:11] The same autonomy of assessment applies to the other areas of communication, to the assessment of critical thinking, and to all the other areas of learning for which we need to document achievement. Although it's clear that a singular form of assessment is not capable of authentically representing student achievement in a learning area, we in the field of education often resort to a singular measure, as seen when universities apply a broad standardized test that is often unassociated with authentic student achievement. The other problem with using singular forms of assessment is that curricula in higher education are designed for student learning to progress from a broad spectrum when students start to a more focused depth of content and application in their interest area by the time they graduate. Yet we administer a standardized measure, asking our seniors in their final semester, at the point in their curriculum that is most focused, to take an exam that has limited association with their learning, content, or interests. At best we might receive inferences of learning, but what results is a score of student achievement that's often not reflective of what the students have learned or can apply. So in a culture of assessment, it is important that we center our attention on the opportunities that exist in our programs for students to demonstrate achievement. When we do this, faculty and support staff are more likely to embrace an assessment process that reflects a clear connection to the educational purposes valued within their program or area, thus contributing to a culture of assessment.
[00:07:46] The primary focus for assessment of student learning should occur within academic and nonacademic programs, student life, and support units. Each program or unit has the capability of identifying student achievement of learning outcomes; with sufficient guidance, valid and reliable direct and indirect assessments can be implemented for each outcome. Many such assessments already exist in curricular and instructional practice, or need only be slightly adapted to become achievement measures of specific learning. An assessment process designed to provide achievement information on student learning will be valued by faculty and used for decisions at the program and university levels. A culture of assessment requires stakeholder ownership in the process. Let me share with you how we came to, and continue to develop, a culture of assessment across Kansas State University. The primary focus of assessment of student learning at Kansas State occurs within the academic and nonacademic programs, student life, and support units. Each program and unit has identified the appropriate student learning outcomes and developed indirect and direct assessments for each outcome. For each assessment, they've established the minimum competency allowable for achievement, and a level above that designated as meeting expected proficiency. Assessments that document student achievement in any of the five undergraduate outcomes are identified and documented as such. So the process is designed to provide information authentic to student learning, valued by the program, that demonstrates achievement to be reported at the university level.
[00:09:31] Ownership in student learning assessment is an important element that leads to developing a culture of assessment within an educational system. If program improvement is to be a goal of an assessment process, assessments that exist within the environment where students demonstrate learning are essential. This is why those involved with instruction must be directly involved with the collection and analysis of data. To lead toward stakeholder ownership, it is important that each program or unit designs an assessment process to collect, analyze, and discuss student learning data in a way that works within their own structure. Assessments must fit into the student learning sequence; otherwise, the resulting achievement data will have little use for program improvement unless the stakeholders are involved in the analysis and discussion of the student learning data. It's also essential that assessments fit within the educational process so students and faculty recognize their relevance. And there's another important issue: if students don't recognize the relevance of the assessments, then their motivation for achievement is in question. Assessment anchored in curriculum and focused on student work is the most effective form of assessment. Now, similar to many models at other universities, all the programs at Kansas State University, the majors, minors, certificates, and student life units, are involved in the assessment of student learning under the umbrella of program improvement. Each program and unit reports annually on student learning. The format for the annual report is designed to lead programs to consider options for improvement.
[00:11:26] Program improvements and enhancements are considered in a sequence as follows: the analysis of assessment data is discussed among faculty and department heads. One way to enhance this culture of continual program improvement is to include feedback beyond the program faculty.
[00:11:50] So each program and unit has a program assessment committee that organizes and oversees the assessment efforts, and these committees then annually report their findings to the College Assessment Review Committee. From this committee they receive peer feedback, from the committee members and the assessment reviewers, on their program's plan and assessment report, which again increases ownership of the overall assessment process. These committees assess the annual reports, provide feedback to the program or unit, and then summarize the college's or division's assessment data for the Office of Assessment. When challenges occur, face-to-face meetings are convened to guide and enhance the assessment process and build the campus-wide assessment culture. The entire sequence provides feedback and involvement from a large number of stakeholders. Now, the usefulness of programmatic assessment data for the institution comes through alignment with the institutional outcomes. If comparable levels of minimal acceptable achievement and proficient achievement are identified within each program and reported according to these levels, then the data from all of the programs can be combined to provide a summary of university-wide student achievement for each institutional outcome. The student learning data from each program and unit, and the college and division reports, are reviewed annually by the Office of Assessment, which then provides additional feedback on the reports through face-to-face meetings with programs and with college administration to enhance the entire process. The assessment data, all reflecting the university outcomes, are combined to provide a summary of university-wide student learning for each of the university outcomes in our assessment system.
[00:13:58] The relevance of assessment data is directly tied to the student learning activities with which the students are engaged, so responsibility and ownership of student learning are embraced by all those involved with the instruction. And again, I want to stress that without ownership, or shared responsibility, I don't believe a culture of assessment is even possible. A system that utilizes data directly tied to the learning activities with which students are engaged creates a picture of assessment for the institution that is authentic, while still encouraging ownership and acceptance among those involved with instruction. Now, one other element that impacts an institutional assessment culture is the value it places on multiple assessment measures. In addition to course-based measures, other measures of learning are requested from programs or units, such as nationally normed tests, to support the data of program assessments. A mistake often made is choosing only one mechanism for assessment, or one way of representing student learning; when this is done, especially when relying only on standardized measures, learning will most certainly be misrepresented by those assessments. In creating a culture of assessment, we've discussed using a variety of assessments, authentic learning experiences, centering the processes in programs, and involving as many stakeholders as possible. Now, let me share with you another mechanism that ties all these things together and enhances a culture of assessment on our campus: a university assessment facilitators committee.
[00:15:41] Representatives from each college and division discuss assessment issues and share effective approaches to assessment at monthly meetings. This committee plans and oversees initiatives to enhance the culture of assessment across the university and works as a conduit between the college and division assessment committees. The university assessment facilitators committee and the Office of Assessment host an annual assessment showcase each spring, through which programs share examples of best practice from the colleges. The showcase recognizes effective assessment measures with a framed certificate presented to the programs by the provost and senior vice president. In addition, to further establish a shared assessment culture, faculty and unit representatives are intentionally asked, and financially supported when possible, to attend conferences on assessment of student learning and effective learning. In other words, we try to value the ownership they have in the assessment process. The paradigm we're hoping to develop is that assessment of student learning is an organized mechanism of self-review and program improvement; university assessment is implemented for the purpose of enhancing student learning, not as a requirement to fulfill a university initiative. So, again, what is a culture of assessment? When student learning is the focus, programs are encouraged to evolve their own assessment paradigm, and the paradigm I'm referring to is the overall belief that assessment of student learning is an essential element of education, and that the primary purpose of assessment is program improvement, not compliance with university requirements.
[00:17:42] You know, I talk about that a lot. I'll go to programs, and they have this assessment process that doesn't seem to be working. I usually ask them, why did you create that? And they come back to me and say, well, I guess that's what I thought you wanted. And I usually say, does it work for you? And they say no. I say, well, then stop doing it. They don't need to be doing things that aren't useful to them. I say, I don't know what's going to be useful to you; what do you want to know about your student learning? And then we design that and use that as the process, and it becomes much more useful, and they buy into the whole process because it is something that's valuable to them.
[00:18:21] Now, in reference to teaching, it should not be presumed that learning has occurred because content was taught. A culture of assessment also considers assessment as integrated through student application of learning, authentically capturing evidence of student learning.
[00:18:43] Now, we also stress that student learning assessment is to reflect application of knowledge and the development of cognitive skills, as well as dispositions, the ways a professional or an educated person should think and act, as well as workplace readiness areas, and that the ownership of student learning assessment belongs to the programs or units but is necessary for university and program accreditation, rather than the other way around. The ownership of the assessment process remains with the program units, although suggestions and guidance from the Office of Assessment provide direction and focus. Most importantly, when the focus remains on student learning instead of on the mechanism of assessment, the educational environment, curriculum, and instruction are seen as the means toward that end, with assessment documenting it. So the culture of assessment desired for higher education must be sustained by evidence of documented learning that is authentic, not by generalities drawn from assessments that are unrelated to the work students actually do. With that common purpose, a culture of assessment can build, develop, and flourish.
[00:20:31] I think we can now do a bit of discussion, share some of the questions you might have, and talk a little more about developing this culture of assessment and putting ownership into the hands of the faculty, into the programs where the learning actually occurs.
[00:20:50] We're going to go ahead and open up the forum for a Q&A session. Once again, if you have questions you'd like to pose to Fred, go ahead and type them into the GoToWebinar control panel, which should be on the right-hand side of your screen. If you type them in there, I'll go ahead and read them out. It looks like we already have a couple in here, so I'm going to go ahead and get things kicked off.
[00:21:16] Dr. Burrack, the first question is: what would you advise for creating a real change within a program?
[00:21:29] I think the primary question is: what do they value within the learning that is to occur in their program? I've found, when I've gone to some programs and talked to them about the assessment plan that they have, that the programs haven't even identified what they want students to learn. What they think about is, we want to teach this in this course, and we want that course, and we want to make sure we teach this. And sometimes they haven't even thought about what order they want it to come in so it becomes effective for student learning. So I ask that question: what do you as a program value within your discipline? Because that's one thing they do: they value their content area. And once they start thinking, this is what's really important as a result of taking these particular courses, this is what we want our students to know, or this is what we want our students to be able to do with what they know, and also, this is how we want our students to think, like an educated person, as they leave our program.
[00:22:47] Once they start thinking in that way, I've seen programs asking themselves more questions about how well their students are doing in these areas, and they develop this need and desire to find out. It's not until that point that I've noticed programs start making changes, and then the changes are inherent because of the particular need that they're discovering. I believe that is the foundation of a culture of assessment: it's within that ownership within the educational unit, as compared to, and we do this as well, putting on a big standardized test, giving it to a select number of people, getting the results, and giving them back to the faculty of the programs, who don't see any value in it. Well, although we still do it, the CLA is what we use, we use that data to support what we're finding from our course-based and program-based assessment, rather than the other way around. I hope I answered that sufficiently.
[00:24:02] Absolutely. The next question is going to be: how do you go through the process of evaluating feedback from assessment and helping to remedy or implement changes? Do you have a sit-down meeting?
[00:24:23] Yes. Let's go to the first part of that, which would be the feedback that is presented. This comes from peer groups, from the College Assessment Review Committee. We have nine colleges on our campus, and our largest one is Arts and Sciences, whose disciplines are divided into three different groups, with peers drawn from those groups. There are two sets of peers that review each of the annual reports and provide feedback, as well as the Office of Assessment, which provides feedback to every program across the university. That feedback is meant to be analyzed not by the Office of Assessment but by the programs themselves. That's the initial paradigm that had to be shifted here: they thought the Office of Assessment was responsible for all evaluation, and I hate to use that word, all assessment of their particular data. Really, that's not our purpose. Our purpose is to design a system where they will annually look at the information that's valuable to them and make decisions. Now, the second part of the question is how we help them move forward to make use of this feedback. I make myself available, as do the other people in my office, to go to them; in other words, go to their home departments, go to their faculty meetings, or go to their assessment meetings. Sometimes it's an individual who will contact me, and we'll talk back and forth, and they'll ask me questions about the feedback, and I'll provide some ideas. Most of it is a brainstorming session. I want the answers, whatever their answers are, to come from within their program's own needs. But it's asking a lot of questions. Quite often I think the most effective approach is to go right to them, go to their assessment meetings. And sometimes, when they don't have questions, I just go to their meetings and sit and listen, and it gives them the opportunity to ask the questions that are important to them.
[00:26:34] Great, thanks. The next question is: what was your process for initiating your assessment process, campus-wide, department by department, small groups of faculty, or all of the above? Is there a sequence that is more effective?
[00:26:48] Yeah, I actually do believe there is a sequence which is more effective. Let me tell you, I think this is how a lot of universities and colleges have come to this point: we had a bad evaluation, 10, 12, 14 years ago, something like that, from our university-wide accreditation, and they said, you have to create this assessment model. So the model was created. I wasn't involved in the creation of it myself, but the overall plan was designed so that each program would design its own program outcomes. This was organized from college to college and program to program to try to provide that ownership, to make it authentic and real to them, rather than saying everyone has to look like this. When that was designed, maybe 40 percent of the programs were on track with it at first. So, to the question of whether there is a sequence: it has to be something that's expected of all the programs, although it can't be forced. A process can't be forced within someone else's program unless they see a relevance and a fit to it. So the way I usually start new initiatives is to move with those programs that see the initial value, to develop the work within this constituent group, and to use them as models for some of the others that are on the fence about it. Pretty soon those programs will jump on the bandwagon, seeing how this works, because they just needed some guidance or instruction or some ideas. And pretty soon we get a large group, and those who are most against it are the only ones left.
[00:28:49] And pretty soon they move into it because they don't want to be the only ones not involved, and it feels awkward. So it becomes, once again, a culture that the whole campus buys into, not because we made them, but because they see a particular value in it, and there's that peer support within the process. There's one other thing I want to mention about that, for those programs that I would call initially noncompliant. I'd like to say right now we don't have any of those; every program reports now and is involved, some at different levels and different qualities, but everyone is fully involved in this assessment process. But five years ago, when I came into this position, we had about 40 percent that were simply noncompliant; we got no reports from them at all. I mentioned this earlier: I went to their programs and I asked them, what don't you like about this process? And I let them sit and share the things that they don't like. And I asked, is this useful to you? And they said no, that's not what they want to do. I said, well, stop doing it then. Don't do it. At that point, actually, they were rather shocked that I didn't force the issue with them. And I said, now, is it important to you what your students learn in your program? They said, of course. And I said, OK, now let's talk about what's important to you. And from that point, we worked our way into fitting the process within their needs.
[00:30:26] Maybe it wasn't the full program at first, but we found out what was valuable to them. Once they found something that worked, that fit within their structure and was valued, the rest of the expectations appeared to just show up, and the process started to develop. One last thing I want to add, because I think it might be valuable for people to know: we do not require everyone to report in exactly the same format. We do ask that they decide the benchmarks for all of their assessments: what is the minimum level that is acceptable for their program, although they may not even graduate people who don't reach that level; we want them to know where the students are. And then, above that, what is the level that they deem their expectation, meeting the standard, or proficient, or whatever you want to call it. So, in other words, they at least divide their students between not meeting the minimum, meeting the minimum but not reaching proficient, and proficient or higher. Now, not all programs like to do it that way; some programs like to use mean scores, and in that way it's nearly impossible to determine which students are not meeting particular proficiency levels, but that's where they find the most meaning for their program. And that's the most important thing: it doesn't always have to match what we want or what I would use. What's most important is what they're doing: is it finding information that's useful to them for program improvement? Then we build upon that. Hopefully that answered the question.
[00:32:04] Great, thanks, Fred. This is a little bit of a follow-up question: would you suggest a top-down or bottom-up approach to driving this type of change at an institution, or at a specific department within an institution?
[00:32:19] There has to be leadership. When you're talking about bottom-up or top-down: a system has to be organized in some way. If you want to call that top-down, you can. But there has to be someone who makes decisions, someone who goes out and directs the whole initiative. There has to be support from the upper administration to say, listen, everyone has to do this, and you might have to do it even if you don't want to, for now. If you call that top-down, fine; I'm going to call it leadership instead.
[00:33:02] But the actual purpose should not be top-down; it needs to come bottom-up from within the programs. That's what I'm thinking about more, what's within the culture, although we need assessment because accrediting agencies say we have to have an assessment process. And this is the paradigm that makes the 21st century unique. In the 20th century, we had a paradigm where education was focused on delivery, and accreditors looked at syllabi to know whether the right content was delivered. Then they would say, OK, your program is good, your university is fine, and you can get accredited. That's not good enough anymore. It's very clear that just because you teach it, just because you disseminate it, doesn't mean the students are learning. So the question now is, what are the students coming out with? That's the twenty-first-century paradigm, and I think it's a good one. But within this is respect for the program's autonomy, and this is where the bottom-up comes in: it's their program, it's their assessment process, and it's for them to find out how well their students are learning the things that they see as most valuable. Now, on the university side.
[00:34:26] I have to report on university-wide achievement in the university outcomes: the forms of communication, critical thinking, et cetera. Now, most of the things they're doing in programs actually reflect those very things, so it would be silly for me to come top-down and say, listen, you're doing all this assessment, but we need it at the university level, so we're going to create another level of assessment that you're going to have to do. It's already there. And that's where a lot of the disrespect comes across campus: by layering things on top of faculty that already exist. Instead, we can mine the data out and find ways to analyze it and come to an understanding of how our students are doing in written communication, in oral communication, in the ways that they authentically demonstrate it in their areas. I would call that the bottom-up, but I think there's a combination of both.
[00:35:28] Perfect, thanks. It looks like this is the last question we have for you this afternoon, unless we get another one in the next few moments: do you feel there is an advantage to using a technology software system as a rallying point, you know, to drive this type of assessment culture?
[00:35:47] Absolutely, it pays to use the technology. You know, when we started, it was all paper reports that were sent to us, and we'd have to keep track; we kept them in files and we sent our feedback back on paper. That took so long. By the time we got feedback back to the programs, it was nearly a year and a half after the program had collected the data, and it really was of no use then, because they'd already gone through another cycle. Once we integrated the technology, the student achievement data could be reported immediately, as soon as they had the data or after the faculty had a chance to talk about it. So, for example, the academic year finishes in May or June; when the faculty come back in August, they look at the data from the year before and report it that fall. The reviewers can immediately provide their feedback and get it back to the faculty, still within that fall, and those faculty can then make changes to their current assessment plan and make improvements. That's what technology has done for us; we're able to move forward with that. Now, there's another step, and I hope it's not a shameless plug, but Mosaad comes with that type of technology, to not only collect the data and report it in another mechanism, but to tie in and align actual student achievement data and pull it back into the reports that you need, even immediately upon the end of a semester, without mining it out of lots of different assessment systems, tests, et cetera. That would make it so much more efficient.