Institutions with regional accreditation now have access to a wealth of information about developing assessment plans, along with many fine examples of general education assessments, academic department assessment plans, and student affairs and student support assessment plans. But many institutions lack institution-level assessment planning that goes beyond general education. This article describes ways to think about what the institution says it will do and ways to incorporate an institution's mission, vision, and values into the assessment planning process.
Given that most institutions of higher education are accredited by regional accreditors, and that those accreditors have required some form of assessment for over two decades, it is safe to say that most institutions have assessment plans, data, and results for the majority of their academic and co-curricular programs. But many institutions do not look at the overall integration of assessment plans across campus. While most institutions have some type of mission statement, there is often no corresponding assessment of that mission. Do students actually do the kinds of things that the mission statement suggests? Do students learn to think critically and solve problems, and do they become life-long learners? Many of those in the field of assessment or institutional effectiveness do not know the answers to these questions because the questions are not always a focus of the institution. Part of this may be due to the fact that most of the regional accreditors do not require assessment at the institutional level, only at the program level.
However, there are many benefits to having an institutional assessment plan. The first is that by rolling program-level data up into an institutional context, institutions can make better use of existing assessment data. In addition, this practice encourages collaboration across campus. That collaboration may be focused on a particular institutional outcome (enhanced critical thinking, for example) that is part of every academic and co-curricular program. This creates the potential for assessment practices that can transform a campus. If faculty from across campus are working with student affairs professionals, for example, imagine what types of improvements could be made at an institutional level.
But, as many in the assessment field know well, assessment practices are often not viewed as a rich opportunity to grow. For example, Erik Gilbert (2015) stated that "we should no longer accept on faith or intuition that learning outcomes assessment has positive and consequential effects on our institutions—or students." Robert Shireman (2016) declared that assessment is "…worthless bean-counting and cataloging exercises." And even though many in assessment have worked diligently to focus on high-quality data that are fair, reliable, and valid, Douglas Roscoe (2017) wrote that "the dysfunctionality of assessment today starts with the primacy of evidence and data." Roscoe went on to suggest that what is really needed for improvement is more dialogue with the faculty about learning. It is an excellent point: standardized testing does not provide a silver bullet in assessment, though it can still be an important tool. Kevin Gannon (2017) suggested that "because we've centered so much of our actual assessment practice around the fetish of outcomes, we've forgotten that the really important part of learning is the process that leads to those outcomes." So, if assessment focuses on the "assessment" part rather than on the "learning" or the "improvement" part, higher education may be spending a lot of time on things that simply do not lead to improvement.
The assessment field has a history of authors and leaders who have warned against too tight a focus on "data" rather than on improvement. Peter Ewell is known for having asked, "why do we insist on measuring it with a micrometer when we mark it with chalk and cut it with an axe?" If the focus is on the "measures" and not on the improvement, assessment may miss the point. At the 2016 IUPUI Assessment Conference, Tom Angelo famously said that "graduating students is not the same as educating students."
And yet, without good quality learning outcomes; appropriate, meaningful, valid, and reliable measures; and resulting data that matter, the field of assessing student learning falls short. We need the measurement theory and we need the pedagogical discussions. Each should inform the other.
Institutional Mission and Outcomes
There is a great need for an institution’s mission, vision, and values statements to align with its institutional goals and learning outcomes. This also means that any strategic planning done by the institution should include the mission, vision, and values statements in the initial discussions. These should all align so that the institution’s intent is clear. Is the focus on citizenship? Global learning? Problem solving? Leadership? Much of this can be gleaned from the mission, vision, and values statements. However, no mission statement is "perfect," and measuring these imperfect and usually short statements can leave out important virtues and goals of an institution. Therefore, there is a strong need for a broad-based understanding of what the institution’s mission statement actually means and what it would look like if students were to meet the goals it addresses.
In 2005, Ross Miller and Andrea Leskes postulated the idea of “levels of assessment.” The first level was assessing an individual student within a course. Questions to be asked could include:
- Is the student learning as expected?
- Has the student work improved over the semester?
- How well has the student achieved the learning outcomes?
- What are the student’s strengths and weaknesses?
Much of what faculty do is focused on this level. However, Miller and Leskes also suggested that the second level might be looking at a particular student across courses. This is especially significant when determining whether a particular student is meeting the goals outlined by a program. Academic advisors perform this type of assessment on a regular basis, and questions might include:
- Has the student’s work improved or met standards during the program?
- How well has the student achieved the disciplinary outcomes of the major program?
- How well has the student achieved the general learning outcomes of the institution?
Miller and Leskes also suggested that there was an assessment level that focused only on courses. With this level, faculty and department chairs might ask:
- How well is the class achieving outcomes?
- Are the assignments helping students achieve the expected level?
- Are students prepared for subsequent courses?
- Is the course level appropriate?
- Is the course fulfilling its purpose in a larger curriculum?
The next level identified is the assessment of programs, which is the most common focus of higher education assessment. Questions to be addressed at this level could include:
- Do the program’s courses contribute to outcomes?
- How well does the program fulfill its purposes in the curriculum?
- Does the program’s design resonate with outcomes?
- Are the courses organized in a coherent manner?
- Does the program advance institution-wide goals?
And, finally, Miller and Leskes identified a level of assessment that focuses on the institution. They say “institutional level assessment can be undertaken for internal improvement or to meet external accountability demands. Results of the former can often also serve the latter purpose.”
Assessment questions at this level include:
- What do the institution’s programs add up to in terms of learning?
- How well are the institution’s goals and outcomes for student learning being achieved?
- How much have students learned over their college years?
- Does the institution educate students for the workforce? Future?
Therefore, there are many ways to think about assessment that use data to make improvements, decisions, and overall enhancements. Quality matters at every level: data that are not valid, or standardized tests that are badly administered, cannot support good and meaningful decisions for improvement.
In order to gather good assessment data, there must be collaboration among faculty, administration, and staff. If this "culture of assessment" can be built, the next step is to make sure that the resulting data are actually used. George Kuh remarked at the 2016 IUPUI conference that "change moves at the speed of trust," and this is most certainly true. Change can be difficult, but an institution that intends to be guided by assessment results must at least consider the possibility of change. And because no data are completely free of bias, it is important to recognize the value of trend data.
Change and improvement are not easy to do, but it is essential that programs and institutions always look for ways to continue to spiral upwards in increasing learning, teaching, and overall institutional effectiveness. Once data from course and program assessment can be linked to overall institutional goals, the entire institution can have the dialogue that is necessary for a learning and improvement paradigm.
Miller, R., & Leskes, A. (2005). Levels of assessment: From the student to the institution. Washington, DC: Association of American Colleges and Universities.
Dr. Catherine M. Wehlburg is the dean of the School of Sciences, Mathematics, and Education at Marymount University. She has served in several administrative roles including as associate provost for Institutional Effectiveness at Texas Christian University where her focus was on assessment, accreditation, and learning. She has taught psychology and educational psychology courses for more than two decades, serving as department chair for some of that time before branching into faculty development and assessment. Dr. Wehlburg served as president of the Association for the Assessment of Learning in Higher Education (AALHE) and is still a member of the AALHE Board.
This article originally appeared in the 2017 conference proceedings of Association for the Assessment of Learning in Higher Education (AALHE). Reprinted with permission. For more information, visit https://www.aalhe.org/page/Proceedings